Test Report: Hyperkit_macOS 19651

f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Failed tests (31/312)

Order  Failed test  Duration (s)
22 TestOffline 195.3
27 TestAddons/Setup 77.05
28 TestCertOptions 251.78
29 TestCertExpiration 1710.7
30 TestDockerFlags 252.27
31 TestForceSystemdFlag 252.09
32 TestForceSystemdEnv 234.87
142 TestMultiControlPlane/serial/StartCluster 76.98
143 TestMultiControlPlane/serial/DeployApp 81.75
144 TestMultiControlPlane/serial/PingHostFromPods 0.25
145 TestMultiControlPlane/serial/AddWorkerNode 0.34
146 TestMultiControlPlane/serial/NodeLabels 0.18
147 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.31
148 TestMultiControlPlane/serial/CopyFile 0.3
149 TestMultiControlPlane/serial/StopSecondaryNode 0.45
150 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.31
151 TestMultiControlPlane/serial/RestartSecondaryNode 53.81
152 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.31
154 TestMultiControlPlane/serial/DeleteSecondaryNode 3.16
155 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.88
156 TestMultiControlPlane/serial/StopCluster 8.53
157 TestMultiControlPlane/serial/RestartCluster 63.88
158 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.66
159 TestMultiControlPlane/serial/AddSecondaryNode 338.12
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 14.53
203 TestMountStart/serial/StartWithMountFirst 136.64
225 TestScheduledStopUnix 141.95
249 TestPause/serial/Start 139.72
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.72
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.34
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 59.93
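For triage it can help to pull just the test names out of the summary rows above, e.g. to feed them back into a `go test -run` invocation. A minimal sketch, assuming the three-column `Order  Test  Duration` row format shown (the sample rows are copied verbatim from the table):

```shell
# Extract the failed-test names (second column) from summary rows.
# The two sample rows below are copied from the table above.
printf '%s\n' \
  '22 TestOffline 195.3' \
  '28 TestCertOptions 251.78' |
awk '{ print $2 }'
# prints:
# TestOffline
# TestCertOptions
```

In practice the full table would be piped in instead of the inline sample, and the resulting names joined with `|` for a `-run` regex.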
TestOffline (195.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-384000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-384000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.900958123s)

-- stdout --
	* [offline-docker-384000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-384000" primary control-plane node in "offline-docker-384000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-384000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0916 04:02:56.108822    5339 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:02:56.109162    5339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:02:56.109167    5339 out.go:358] Setting ErrFile to fd 2...
	I0916 04:02:56.109171    5339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:02:56.109382    5339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 04:02:56.111188    5339 out.go:352] Setting JSON to false
	I0916 04:02:56.136651    5339 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3746,"bootTime":1726480830,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 04:02:56.136841    5339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:02:56.207269    5339 out.go:177] * [offline-docker-384000] minikube v1.34.0 on Darwin 14.6.1
	I0916 04:02:56.269109    5339 notify.go:220] Checking for updates...
	I0916 04:02:56.291115    5339 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:02:56.311340    5339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 04:02:56.333065    5339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 04:02:56.359373    5339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:02:56.380274    5339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:02:56.422351    5339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:02:56.443356    5339 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:02:56.471605    5339 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 04:02:56.513397    5339 start.go:297] selected driver: hyperkit
	I0916 04:02:56.513426    5339 start.go:901] validating driver "hyperkit" against <nil>
	I0916 04:02:56.513444    5339 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:02:56.517419    5339 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:02:56.517540    5339 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 04:02:56.525791    5339 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 04:02:56.529391    5339 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:02:56.529409    5339 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 04:02:56.529442    5339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:02:56.529678    5339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:02:56.529714    5339 cni.go:84] Creating CNI manager for ""
	I0916 04:02:56.529752    5339 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:02:56.529758    5339 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:02:56.529825    5339 start.go:340] cluster config:
	{Name:offline-docker-384000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:02:56.529904    5339 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:02:56.577315    5339 out.go:177] * Starting "offline-docker-384000" primary control-plane node in "offline-docker-384000" cluster
	I0916 04:02:56.598643    5339 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:02:56.598724    5339 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 04:02:56.598758    5339 cache.go:56] Caching tarball of preloaded images
	I0916 04:02:56.598975    5339 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 04:02:56.598999    5339 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:02:56.599530    5339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/offline-docker-384000/config.json ...
	I0916 04:02:56.599580    5339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/offline-docker-384000/config.json: {Name:mkf6e1ca643c7339e3d850efea19c4aaadb2d916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:02:56.600244    5339 start.go:360] acquireMachinesLock for offline-docker-384000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:02:56.600381    5339 start.go:364] duration metric: took 101.688µs to acquireMachinesLock for "offline-docker-384000"
	I0916 04:02:56.600429    5339 start.go:93] Provisioning new machine with config: &{Name:offline-docker-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:02:56.600565    5339 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:02:56.621358    5339 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:02:56.621513    5339 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:02:56.621549    5339 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:02:56.630486    5339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52640
	I0916 04:02:56.630918    5339 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:02:56.631457    5339 main.go:141] libmachine: Using API Version  1
	I0916 04:02:56.631472    5339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:02:56.631701    5339 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:02:56.631825    5339 main.go:141] libmachine: (offline-docker-384000) Calling .GetMachineName
	I0916 04:02:56.631934    5339 main.go:141] libmachine: (offline-docker-384000) Calling .DriverName
	I0916 04:02:56.632046    5339 start.go:159] libmachine.API.Create for "offline-docker-384000" (driver="hyperkit")
	I0916 04:02:56.632068    5339 client.go:168] LocalClient.Create starting
	I0916 04:02:56.632101    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:02:56.632161    5339 main.go:141] libmachine: Decoding PEM data...
	I0916 04:02:56.632173    5339 main.go:141] libmachine: Parsing certificate...
	I0916 04:02:56.632246    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:02:56.632285    5339 main.go:141] libmachine: Decoding PEM data...
	I0916 04:02:56.632298    5339 main.go:141] libmachine: Parsing certificate...
	I0916 04:02:56.632312    5339 main.go:141] libmachine: Running pre-create checks...
	I0916 04:02:56.632322    5339 main.go:141] libmachine: (offline-docker-384000) Calling .PreCreateCheck
	I0916 04:02:56.632433    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:02:56.632660    5339 main.go:141] libmachine: (offline-docker-384000) Calling .GetConfigRaw
	I0916 04:02:56.642376    5339 main.go:141] libmachine: Creating machine...
	I0916 04:02:56.642388    5339 main.go:141] libmachine: (offline-docker-384000) Calling .Create
	I0916 04:02:56.642520    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:02:56.642642    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:02:56.642516    5361 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:02:56.642703    5339 main.go:141] libmachine: (offline-docker-384000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:02:57.112820    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:02:57.112713    5361 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/id_rsa...
	I0916 04:02:57.152163    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:02:57.152116    5361 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk...
	I0916 04:02:57.152177    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Writing magic tar header
	I0916 04:02:57.152188    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Writing SSH key tar header
	I0916 04:02:57.189119    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:02:57.189024    5361 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000 ...
	I0916 04:02:57.627620    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:02:57.627642    5339 main.go:141] libmachine: (offline-docker-384000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid
	I0916 04:02:57.627652    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Using UUID 529837b5-7531-4b3e-ac4d-be9c6c59b079
	I0916 04:02:57.956560    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Generated MAC ee:2a:29:ea:a2:a6
	I0916 04:02:57.956577    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000
	I0916 04:02:57.956607    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"529837b5-7531-4b3e-ac4d-be9c6c59b079", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0916 04:02:57.956638    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"529837b5-7531-4b3e-ac4d-be9c6c59b079", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0916 04:02:57.956734    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "529837b5-7531-4b3e-ac4d-be9c6c59b079", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage,
/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000"}
	I0916 04:02:57.956804    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 529837b5-7531-4b3e-ac4d-be9c6c59b079 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machi
nes/offline-docker-384000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000"
	I0916 04:02:57.956820    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:02:57.959720    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 DEBUG: hyperkit: Pid is 5386
	I0916 04:02:57.960936    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 0
	I0916 04:02:57.960951    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:02:57.961039    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:02:57.962103    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:02:57.962204    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:02:57.962219    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:02:57.962233    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:02:57.962247    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:02:57.962264    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:02:57.962276    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:02:57.962318    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:02:57.962344    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:02:57.962354    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:02:57.962361    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:02:57.962368    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:02:57.962383    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:02:57.962388    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:02:57.962400    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:02:57.962409    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:02:57.962417    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:02:57.967410    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:02:58.020478    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:02:58.040218    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:02:58.040241    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:02:58.040248    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:02:58.040257    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:02:58.416215    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:02:58.416230    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:02:58.531136    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:02:58.531182    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:02:58.531194    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:02:58.531200    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:02:58.531995    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:02:58.532007    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:02:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:02:59.963515    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 1
	I0916 04:02:59.963536    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:02:59.963638    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:02:59.964447    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:02:59.964479    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:02:59.964490    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:02:59.964498    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:02:59.964506    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:02:59.964513    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:02:59.964520    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:02:59.964526    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:02:59.964534    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:02:59.964541    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:02:59.964548    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:02:59.964555    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:02:59.964571    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:02:59.964583    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:02:59.964591    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:02:59.964597    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:02:59.964605    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:01.966675    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 2
	I0916 04:03:01.966688    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:01.966698    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:01.967524    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:01.967570    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:01.967589    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:01.967598    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:01.967605    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:01.967612    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:01.967622    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:01.967632    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:01.967639    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:01.967647    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:01.967667    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:01.967677    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:01.967684    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:01.967692    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:01.967698    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:01.967712    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:01.967721    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:03.916219    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:03:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:03:03.916363    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:03:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:03:03.916372    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:03:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:03:03.936300    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:03:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:03:03.969900    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 3
	I0916 04:03:03.969921    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:03.970087    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:03.971515    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:03.971593    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:03.971609    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:03.971624    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:03.971637    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:03.971648    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:03.971659    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:03.971692    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:03.971703    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:03.971737    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:03.971761    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:03.971776    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:03.971789    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:03.971804    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:03.971819    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:03.971856    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:03.971879    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:05.973037    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 4
	I0916 04:03:05.973054    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:05.973157    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:05.973957    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:05.974030    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:05.974045    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:05.974071    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:05.974085    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:05.974092    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:05.974101    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:05.974127    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:05.974140    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:05.974153    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:05.974161    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:05.974168    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:05.974175    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:05.974185    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:05.974200    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:05.974209    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:05.974220    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:07.976255    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 5
	I0916 04:03:07.976270    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:07.976347    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:07.977126    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:07.977174    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:07.977187    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:07.977198    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:07.977207    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:07.977217    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:07.977225    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:07.977245    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:07.977254    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:07.977261    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:07.977268    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:07.977284    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:07.977296    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:07.977311    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:07.977324    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:07.977332    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:07.977339    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:09.978174    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 6
	I0916 04:03:09.978195    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:09.978240    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:09.979026    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:09.979078    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:09.979092    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:09.979100    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:09.979109    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:09.979122    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:09.979129    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:09.979140    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:09.979146    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:09.979153    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:09.979159    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:09.979165    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:09.979173    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:09.979183    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:09.979190    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:09.979195    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:09.979212    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:11.981304    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 7
	I0916 04:03:11.981319    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:11.981408    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:11.982176    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:11.982214    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:11.982236    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:11.982249    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:11.982262    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:11.982273    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:11.982294    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:11.982302    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:11.982308    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:11.982316    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:11.982331    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:11.982339    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:11.982347    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:11.982355    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:11.982362    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:11.982367    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:11.982395    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:13.983388    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 8
	I0916 04:03:13.983400    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:13.983458    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:13.984246    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:13.984293    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:13.984303    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:13.984315    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:13.984321    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:13.984328    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:13.984334    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:13.984349    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:13.984357    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:13.984364    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:13.984372    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:13.984378    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:13.984386    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:13.984394    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:13.984401    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:13.984408    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:13.984414    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:15.986487    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 9
	I0916 04:03:15.986504    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:15.986589    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:15.987379    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:15.987453    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:15.987471    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:15.987485    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:15.987492    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:15.987511    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:15.987522    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:15.987529    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:15.987535    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:15.987550    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:15.987557    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:15.987566    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:15.987573    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:15.987580    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:15.987587    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:15.987606    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:15.987618    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:17.989660    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 10
	I0916 04:03:17.989675    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:17.989734    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:17.990505    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:17.990555    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:17.990563    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:17.990571    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:17.990577    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:17.990592    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:17.990602    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:17.990609    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:17.990617    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:17.990630    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:17.990642    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:17.990653    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:17.990661    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:17.990673    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:17.990695    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:17.990702    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:17.990709    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:19.991220    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 11
	I0916 04:03:19.991232    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:19.991298    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:19.992073    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:19.992113    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:19.992122    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:19.992130    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:19.992136    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:19.992142    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:19.992147    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:19.992161    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:19.992172    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:19.992187    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:19.992198    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:19.992206    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:19.992214    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:19.992227    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:19.992235    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:19.992244    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:19.992252    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:21.994327    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 12
	I0916 04:03:21.994340    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:21.994387    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:21.995186    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:21.995221    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:21.995231    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:21.995240    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:21.995250    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:21.995258    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:21.995265    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:21.995273    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:21.995279    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:21.995286    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:21.995291    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:21.995299    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:21.995304    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:21.995317    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:21.995331    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:21.995338    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:21.995346    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:23.996653    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 13
	I0916 04:03:23.996668    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:23.996725    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:23.997500    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:23.997568    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:23.997589    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:23.997602    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:23.997627    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:23.997639    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:23.997648    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:23.997666    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:23.997678    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:23.997685    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:23.997701    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:23.997708    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:23.997716    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:23.997722    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:23.997728    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:23.997735    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:23.997745    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:25.998028    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 14
	I0916 04:03:25.998043    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:25.998110    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:25.998900    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:25.998912    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:25.998918    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:25.998924    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:25.998929    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:25.998944    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:25.998949    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:25.998957    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:25.998962    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:25.998968    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:25.998974    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:25.998990    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:25.999002    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:25.999011    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:25.999019    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:25.999028    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:25.999036    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:27.999814    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 15
	I0916 04:03:27.999829    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:27.999903    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:28.000671    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:28.000733    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:28.000741    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:28.000749    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:28.000754    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:28.000765    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:28.000777    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:28.000785    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:28.000793    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:28.000808    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:28.000820    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:28.000829    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:28.000837    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:28.000844    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:28.000854    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:28.000862    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:28.000870    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:30.002944    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 16
	I0916 04:03:30.002958    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:30.003011    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:30.003784    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:30.003807    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:30.003815    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:30.003832    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:30.003839    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:30.003846    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:30.003853    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:30.003860    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:30.003867    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:30.003880    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:30.003889    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:30.003896    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:30.003902    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:30.003909    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:30.003919    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:30.003927    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:30.003935    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:32.005979    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 17
	I0916 04:03:32.005994    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:32.006042    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:32.006826    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:32.006886    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:32.006895    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:32.006902    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:32.006907    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:32.006920    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:32.006928    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:32.006946    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:32.006958    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:32.006966    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:32.006982    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:32.006990    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:32.006996    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:32.007004    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:32.007017    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:32.007029    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:32.007044    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:34.007256    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 18
	I0916 04:03:34.007274    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:34.007372    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:34.008151    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:34.008209    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:34.008222    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:34.008231    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:34.008237    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:34.008246    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:34.008255    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:34.008271    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:34.008283    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:34.008305    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:34.008318    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:34.008326    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:34.008331    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:34.008340    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:34.008347    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:34.008359    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:34.008370    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:36.009783    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 19
	I0916 04:03:36.009822    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:36.009836    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:36.010630    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:36.010674    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:36.010684    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:36.010704    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:36.010716    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:36.010728    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:36.010734    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:36.010740    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:36.010755    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:36.010768    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:36.010779    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:36.010786    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:36.010802    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:36.010815    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:36.010826    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:36.010839    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:36.010849    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:38.012831    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 20
	I0916 04:03:38.012846    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:38.012891    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:38.013647    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:38.013686    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:38.013696    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:38.013714    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:38.013720    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:38.013726    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:38.013731    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:38.013745    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:38.013755    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:38.013761    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:38.013768    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:38.013774    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:38.013782    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:38.013796    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:38.013805    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:38.013812    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:38.013819    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:40.015881    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 21
	I0916 04:03:40.015893    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:40.016011    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:40.016925    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:40.016961    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:40.016970    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:40.016991    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:40.017006    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:40.017014    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:40.017020    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:40.017028    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:40.017044    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:40.017053    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:40.017061    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:40.017068    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:40.017075    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:40.017082    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:40.017088    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:40.017095    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:40.017101    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:42.019125    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 22
	I0916 04:03:42.019136    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:42.019204    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:42.019985    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:42.020014    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:42.020023    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:42.020030    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:42.020039    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:42.020047    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:42.020054    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:42.020068    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:42.020074    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:42.020081    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:42.020087    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:42.020100    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:42.020111    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:42.020119    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:42.020127    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:42.020142    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:42.020153    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:44.020412    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 23
	I0916 04:03:44.020426    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:44.020494    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:44.021283    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:44.021324    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:44.021334    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:44.021342    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:44.021349    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:44.021365    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:44.021377    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:44.021385    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:44.021391    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:44.021408    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:44.021416    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:44.021430    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:44.021451    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:44.021463    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:44.021477    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:44.021485    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:44.021495    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:46.022578    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 24
	I0916 04:03:46.022594    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:46.022639    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:46.023402    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:46.023419    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:46.023436    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:46.023447    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:46.023453    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:46.023460    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:46.023468    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:46.023479    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:46.023498    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:46.023508    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:46.023519    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:46.023528    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:46.023539    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:46.023547    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:46.023553    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:46.023559    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:46.023564    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:48.025148    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 25
	I0916 04:03:48.025163    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:48.025222    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:48.026015    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:48.026035    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:48.026061    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:48.026069    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:48.026077    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:48.026086    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:48.026094    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:48.026101    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:48.026108    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:48.026114    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:48.026121    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:48.026133    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:48.026140    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:48.026147    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:48.026163    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:48.026175    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:48.026185    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:50.027596    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 26
	I0916 04:03:50.027609    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:50.027676    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:50.028468    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:50.028520    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:50.028535    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:50.028546    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:50.028555    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:50.028562    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:50.028570    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:50.028585    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:50.028612    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:50.028619    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:50.028625    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:50.028631    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:50.028641    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:50.028650    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:50.028656    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:50.028663    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:50.028682    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
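Each attempt above re-reads /var/db/dhcpd_leases and scans every entry for the VM's MAC address (ee:2a:29:ea:a2:a6); since only 15 stale minikube leases are present and none carries that MAC, the loop never matches and the driver keeps retrying every two seconds. The lookup the log is performing can be sketched as follows (a minimal illustration in Go against the single-line entry format printed in the debug output; `findIPByMAC` is a hypothetical helper for illustration, not the driver's actual code, and the on-disk lease file format may differ from this logged representation):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPByMAC scans lease entries (one per line, in the
// "{Name:... IPAddress:... HWAddress:... ...}" form shown in the
// debug log) and returns the IP bound to the given MAC, if any.
func findIPByMAC(leases, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
	for _, line := range strings.Split(leases, "\n") {
		m := re.FindStringSubmatch(line)
		if m != nil && strings.EqualFold(m[2], mac) {
			return m[1], true
		}
	}
	return "", false // MAC not leased yet; caller retries, as in the log
}

func main() {
	// Two entries copied from the log above; the target MAC is absent,
	// which is exactly why the driver loops through attempt after attempt.
	leases := `{Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
{Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}`

	ip, ok := findIPByMAC(leases, "ee:2a:29:ea:a2:a6")
	fmt.Println(ip, ok) // no match: the new VM never obtained a lease

	ip, ok = findIPByMAC(leases, "ca:75:7f:e5:69:21")
	fmt.Println(ip, ok) // matches the second stale entry
}
```

The failure mode here is that all 15 entries belong to earlier minikube VMs, so the new machine's MAC never appears; the polling loop exhausts its attempts and the test times out.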
	I0916 04:03:52.029536    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 27
	I0916 04:03:52.029552    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:52.029617    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:52.030350    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:52.030387    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:52.030398    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:52.030420    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:52.030434    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:52.030448    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:52.030457    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:52.030464    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:52.030471    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:52.030478    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:52.030483    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:52.030490    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:52.030497    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:52.030504    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:52.030510    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:52.030537    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:52.030545    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:54.031725    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 28
	I0916 04:03:54.031740    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:54.031793    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:54.032571    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:54.032633    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:54.032644    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:54.032651    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:54.032656    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:54.032661    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:54.032666    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:54.032674    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:54.032690    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:54.032697    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:54.032706    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:54.032712    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:54.032718    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:54.032730    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:54.032737    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:54.032744    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:54.032759    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:56.034857    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 29
	I0916 04:03:56.034877    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:03:56.034936    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:03:56.035758    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for ee:2a:29:ea:a2:a6 in /var/db/dhcpd_leases ...
	I0916 04:03:56.035816    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:03:56.035826    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:03:56.035837    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:03:56.035848    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:03:56.035854    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:03:56.035860    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:03:56.035866    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:03:56.035872    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:03:56.035894    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:03:56.035905    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:03:56.035913    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:03:56.035921    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:03:56.035927    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:03:56.035935    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:03:56.035952    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:03:56.035960    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:03:58.036313    5339 client.go:171] duration metric: took 1m1.403373576s to LocalClient.Create
	I0916 04:04:00.038471    5339 start.go:128] duration metric: took 1m3.437005568s to createHost
	I0916 04:04:00.038497    5339 start.go:83] releasing machines lock for "offline-docker-384000", held for 1m3.437217514s
	W0916 04:04:00.038514    5339 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:2a:29:ea:a2:a6
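The failed attempts above show the driver repeatedly scanning the parsed `/var/db/dhcpd_leases` entries for the VM's MAC (`ee:2a:29:ea:a2:a6`) and never finding it among the 15 existing leases. A minimal sketch of that match, using the entry format echoed in this log (the on-disk lease file format differs, and `find_ip_for_mac` is an illustrative name, not the driver's):

```python
import re

# Matches the driver's parsed lease representation as printed in the log:
# {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:... Lease:...}
LEASE_RE = re.compile(
    r"\{Name:(?P<name>\S+) IPAddress:(?P<ip>\S+) "
    r"HWAddress:(?P<mac>\S+) ID:(?P<id>\S+) Lease:(?P<lease>\S+)\}"
)

def find_ip_for_mac(leases_text: str, mac: str):
    """Return the IP bound to `mac`, or None when no lease matches
    (the condition that produces 'IP address never found in dhcp leases file')."""
    for m in LEASE_RE.finditer(leases_text):
        if m.group("mac") == mac:
            return m.group("ip")
    return None
```

When the MAC never appears within the retry budget, the driver gives up with the "could not find an IP address" error seen here.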
	I0916 04:04:00.038844    5339 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:04:00.038874    5339 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:04:00.047874    5339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52676
	I0916 04:04:00.048218    5339 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:04:00.048562    5339 main.go:141] libmachine: Using API Version  1
	I0916 04:04:00.048572    5339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:04:00.048785    5339 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:04:00.049146    5339 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:04:00.049174    5339 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:04:00.057555    5339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52678
	I0916 04:04:00.057894    5339 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:04:00.058253    5339 main.go:141] libmachine: Using API Version  1
	I0916 04:04:00.058274    5339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:04:00.058513    5339 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:04:00.058648    5339 main.go:141] libmachine: (offline-docker-384000) Calling .GetState
	I0916 04:04:00.058739    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.058813    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:00.059755    5339 main.go:141] libmachine: (offline-docker-384000) Calling .DriverName
	I0916 04:04:00.080842    5339 out.go:177] * Deleting "offline-docker-384000" in hyperkit ...
	I0916 04:04:00.101890    5339 main.go:141] libmachine: (offline-docker-384000) Calling .Remove
	I0916 04:04:00.102041    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.102055    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.102108    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:00.103045    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.103109    5339 main.go:141] libmachine: (offline-docker-384000) DBG | waiting for graceful shutdown
	I0916 04:04:01.105243    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:01.105311    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:01.106217    5339 main.go:141] libmachine: (offline-docker-384000) DBG | waiting for graceful shutdown
	I0916 04:04:02.106374    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:02.106491    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:02.108161    5339 main.go:141] libmachine: (offline-docker-384000) DBG | waiting for graceful shutdown
	I0916 04:04:03.109316    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:03.109425    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:03.110067    5339 main.go:141] libmachine: (offline-docker-384000) DBG | waiting for graceful shutdown
	I0916 04:04:04.111087    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:04.111169    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:04.111752    5339 main.go:141] libmachine: (offline-docker-384000) DBG | waiting for graceful shutdown
	I0916 04:04:05.112356    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:05.112419    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5386
	I0916 04:04:05.113430    5339 main.go:141] libmachine: (offline-docker-384000) DBG | sending sigkill
	I0916 04:04:05.113439    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0916 04:04:05.122339    5339 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:2a:29:ea:a2:a6
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ee:2a:29:ea:a2:a6
	I0916 04:04:05.122352    5339 start.go:729] Will try again in 5 seconds ...
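The "StartHost failed, but will try again" / "Will try again in 5 seconds" pattern above is a fixed-delay retry around host creation; a minimal sketch of that control flow (function and parameter names are illustrative, not minikube's):

```python
import time

def start_host_with_retry(create, attempts=2, delay=5.0):
    """Call `create` up to `attempts` times, sleeping `delay` seconds
    between failures, and re-raise the last error if all attempts fail."""
    last_err = None
    for i in range(attempts):
        try:
            return create()
        except RuntimeError as err:
            last_err = err
            if i + 1 < attempts:
                time.sleep(delay)
    raise last_err
```

In this run the second attempt also times out waiting for a DHCP lease, so the whole test ultimately fails.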
	I0916 04:04:05.152935    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:04:05 WARN : hyperkit: failed to read stderr: EOF
	I0916 04:04:05.152969    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:04:05 WARN : hyperkit: failed to read stdout: EOF
	I0916 04:04:10.124502    5339 start.go:360] acquireMachinesLock for offline-docker-384000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:05:02.913153    5339 start.go:364] duration metric: took 52.787877043s to acquireMachinesLock for "offline-docker-384000"
	I0916 04:05:02.913193    5339 start.go:93] Provisioning new machine with config: &{Name:offline-docker-384000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-384000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:05:02.913242    5339 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:05:02.934843    5339 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:05:02.934922    5339 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:05:02.934949    5339 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:05:02.943377    5339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52686
	I0916 04:05:02.943708    5339 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:05:02.944037    5339 main.go:141] libmachine: Using API Version  1
	I0916 04:05:02.944047    5339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:05:02.944306    5339 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:05:02.944431    5339 main.go:141] libmachine: (offline-docker-384000) Calling .GetMachineName
	I0916 04:05:02.944518    5339 main.go:141] libmachine: (offline-docker-384000) Calling .DriverName
	I0916 04:05:02.944626    5339 start.go:159] libmachine.API.Create for "offline-docker-384000" (driver="hyperkit")
	I0916 04:05:02.944642    5339 client.go:168] LocalClient.Create starting
	I0916 04:05:02.944666    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:05:02.944718    5339 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:02.944727    5339 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:02.944764    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:05:02.944800    5339 main.go:141] libmachine: Decoding PEM data...
	I0916 04:05:02.944813    5339 main.go:141] libmachine: Parsing certificate...
	I0916 04:05:02.944826    5339 main.go:141] libmachine: Running pre-create checks...
	I0916 04:05:02.944831    5339 main.go:141] libmachine: (offline-docker-384000) Calling .PreCreateCheck
	I0916 04:05:02.944909    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.944940    5339 main.go:141] libmachine: (offline-docker-384000) Calling .GetConfigRaw
	I0916 04:05:02.976731    5339 main.go:141] libmachine: Creating machine...
	I0916 04:05:02.976739    5339 main.go:141] libmachine: (offline-docker-384000) Calling .Create
	I0916 04:05:02.976828    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.976962    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:05:02.976821    5578 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:05:02.977004    5339 main.go:141] libmachine: (offline-docker-384000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:05:03.180772    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:05:03.180670    5578 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/id_rsa...
	I0916 04:05:03.238578    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:05:03.238516    5578 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk...
	I0916 04:05:03.238586    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Writing magic tar header
	I0916 04:05:03.238593    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Writing SSH key tar header
	I0916 04:05:03.239188    5339 main.go:141] libmachine: (offline-docker-384000) DBG | I0916 04:05:03.239129    5578 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000 ...
	I0916 04:05:03.614795    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:03.614813    5339 main.go:141] libmachine: (offline-docker-384000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid
	I0916 04:05:03.614830    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Using UUID f04eb037-ef03-4c16-be49-ba4814c0643e
	I0916 04:05:03.640740    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Generated MAC 76:49:54:7c:db:f1
	I0916 04:05:03.640757    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000
	I0916 04:05:03.640794    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f04eb037-ef03-4c16-be49-ba4814c0643e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:05:03.640822    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f04eb037-ef03-4c16-be49-ba4814c0643e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:05:03.640870    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f04eb037-ef03-4c16-be49-ba4814c0643e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000"}
	I0916 04:05:03.640907    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f04eb037-ef03-4c16-be49-ba4814c0643e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/offline-docker-384000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-384000"
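The hyperkit CmdLine logged above is assembled from the HyperKit struct fields into the Arguments slice. A simplified sketch of that flag layout, keeping only the core slots visible in this log (`hyperkit_argv` and its parameters are illustrative names, not the driver's):

```python
def hyperkit_argv(state_dir, uuid, cpus, memory_mb, rawdisk, iso):
    """Rebuild the flag layout seen in the logged Arguments: ACPI (-A),
    pidfile (-F), CPU/memory sizing, PCI slot assignments (-s), and VM UUID (-U)."""
    return [
        "/usr/local/bin/hyperkit",
        "-A", "-u",
        "-F", f"{state_dir}/hyperkit.pid",
        "-c", str(cpus),
        "-m", f"{memory_mb}M",
        "-s", "0:0,hostbridge",       # slot 0: host bridge
        "-s", "31,lpc",               # slot 31: LPC bus
        "-s", "1:0,virtio-net",       # slot 1: NIC (gets the generated MAC)
        "-U", uuid,
        "-s", f"2:0,virtio-blk,{rawdisk}",  # slot 2: raw disk image
        "-s", f"3,ahci-cd,{iso}",           # slot 3: boot2docker ISO
        "-s", "4,virtio-rnd",               # slot 4: entropy device
    ]
```

The UUID passed via `-U` is what vmnet uses to derive a stable MAC, which the driver then waits to see in the DHCP lease file.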
	I0916 04:05:03.640955    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:05:03.643817    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 DEBUG: hyperkit: Pid is 5579
	I0916 04:05:03.645073    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 0
	I0916 04:05:03.645093    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:03.645146    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:03.646083    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:03.646151    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:03.646171    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:03.646209    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:03.646225    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:03.646238    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:03.646250    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:03.646275    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:03.646291    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:03.646303    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:03.646316    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:03.646333    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:03.646340    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:03.646347    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:03.646356    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:03.646381    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:03.646400    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:03.651639    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:05:03.659698    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/offline-docker-384000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:05:03.660696    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:05:03.660725    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:05:03.660738    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:05:03.660749    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:05:04.034857    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:05:04.034868    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:05:04.149511    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:05:04.149529    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:05:04.149543    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:05:04.149551    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:05:04.150420    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:05:04.150434    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:05:05.646824    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 1
	I0916 04:05:05.646837    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:05.646902    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:05.647733    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:05.647751    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:05.647769    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:05.647787    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:05.647799    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:05.647807    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:05.647815    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:05.647831    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:05.647840    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:05.647847    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:05.647853    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:05.647865    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:05.647873    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:05.647887    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:05.647896    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:05.647904    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:05.647911    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:07.648868    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 2
	I0916 04:05:07.648881    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:07.649002    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:07.649799    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:07.649861    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:07.649872    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:07.649881    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:07.649888    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:07.649895    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:07.649901    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:07.649908    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:07.649914    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:07.649920    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:07.649926    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:07.649932    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:07.649938    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:07.649944    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:07.649950    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:07.649955    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:07.649960    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:09.509028    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:05:09.509187    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:05:09.509199    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:05:09.529036    5339 main.go:141] libmachine: (offline-docker-384000) DBG | 2024/09/16 04:05:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:05:09.652186    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 3
	I0916 04:05:09.652211    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:09.652413    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:09.653859    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:09.654007    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:09.654026    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:09.654042    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:09.654058    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:09.654072    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:09.654092    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:09.654107    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:09.654177    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:09.654193    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:09.654201    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:09.654210    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:09.654221    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:09.654248    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:09.654259    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:09.654279    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:09.654303    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:11.654412    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 4
	I0916 04:05:11.654425    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:11.654520    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:11.655307    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:11.655353    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:11.655364    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:11.655378    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:11.655402    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:11.655412    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:11.655419    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:11.655428    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:11.655435    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:11.655443    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:11.655449    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:11.655462    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:11.655470    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:11.655479    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:11.655484    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:11.655490    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:11.655498    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:13.657285    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 5
	I0916 04:05:13.657296    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:13.657349    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:13.658159    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:13.658201    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:13.658211    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:13.658223    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:13.658230    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:13.658236    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:13.658242    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:13.658250    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:13.658261    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:13.658271    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:13.658279    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:13.658287    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:13.658295    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:13.658304    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:13.658311    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:13.658318    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:13.658334    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:15.660402    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 6
	I0916 04:05:15.660416    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:15.660492    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:15.661328    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:15.661388    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:15.661398    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:15.661406    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:15.661413    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:15.661426    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:15.661433    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:15.661439    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:15.661444    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:15.661451    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:15.661458    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:15.661469    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:15.661479    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:15.661490    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:15.661497    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:15.661522    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:15.661536    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:17.662460    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 7
	I0916 04:05:17.662473    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:17.662590    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:17.663405    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:17.663459    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:17.663472    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:17.663481    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:17.663512    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:17.663525    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:17.663534    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:17.663542    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:17.663549    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:17.663557    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:17.663566    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:17.663572    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:17.663581    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:17.663596    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:17.663615    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:17.663627    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:17.663635    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:19.664884    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 8
	I0916 04:05:19.664897    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:19.664950    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:19.665730    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:19.665764    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:19.665785    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:19.665797    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:19.665807    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:19.665813    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:19.665824    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:19.665833    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:19.665840    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:19.665846    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:19.665851    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:19.665857    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:19.665863    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:19.665869    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:19.665878    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:19.665885    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:19.665894    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:21.667400    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 9
	I0916 04:05:21.667414    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:21.667476    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:21.668262    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:21.668318    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:21.668328    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:21.668336    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:21.668342    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:21.668348    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:21.668355    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:21.668361    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:21.668367    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:21.668373    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:21.668380    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:21.668387    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:21.668396    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:21.668410    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:21.668422    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:21.668430    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:21.668438    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:23.668754    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 10
	I0916 04:05:23.668768    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:23.668835    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:23.669890    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:23.669922    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:23.669933    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:23.669949    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:23.669962    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:23.669971    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:23.669979    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:23.669986    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:23.669992    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:23.669998    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:23.670005    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:23.670011    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:23.670023    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:23.670035    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:23.670044    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:23.670052    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:23.670067    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:25.670415    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 11
	I0916 04:05:25.670428    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:25.670521    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:25.671276    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:25.671321    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:25.671330    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:25.671337    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:25.671344    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:25.671357    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:25.671380    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:25.671390    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:25.671398    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:25.671419    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:25.671426    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:25.671441    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:25.671452    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:25.671460    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:25.671467    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:25.671476    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:25.671489    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:27.673403    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 12
	I0916 04:05:27.673415    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:27.673460    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:27.674240    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:27.674296    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:27.674308    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:27.674318    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:27.674324    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:27.674331    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:27.674337    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:27.674344    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:27.674357    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:27.674364    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:27.674370    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:27.674387    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:27.674398    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:27.674406    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:27.674413    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:27.674420    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:27.674431    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:29.676479    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 13
	I0916 04:05:29.676507    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:29.676552    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:29.677505    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:29.677568    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:29.677576    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:29.677584    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:29.677590    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:29.677604    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:29.677613    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:29.677622    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:29.677637    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:29.677664    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:29.677674    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:29.677686    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:29.677693    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:29.677699    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:29.677707    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:29.677714    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:29.677719    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:31.679783    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 14
	I0916 04:05:31.679795    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:31.679838    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:31.680606    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:31.680661    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:31.680676    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:31.680689    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:31.680708    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:31.680721    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:31.680727    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:31.680736    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:31.680747    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:31.680755    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:31.680762    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:31.680767    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:31.680774    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:31.680781    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:31.680795    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:31.680808    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:31.680817    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:33.682830    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 15
	I0916 04:05:33.682845    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:33.682898    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:33.683696    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:33.683741    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:33.683752    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:33.683760    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:33.683767    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:33.683773    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:33.683779    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:33.683785    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:33.683791    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:33.683806    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:33.683815    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:33.683823    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:33.683831    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:33.683840    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:33.683847    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:33.683864    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:33.683876    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:35.684582    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 16
	I0916 04:05:35.684595    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:35.684730    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:35.685571    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:35.685626    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:35.685636    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:35.685644    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:35.685652    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:35.685667    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:35.685686    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:35.685697    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:35.685708    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:35.685724    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:35.685736    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:35.685744    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:35.685749    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:35.685760    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:35.685769    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:35.685786    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:35.685797    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:37.685847    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 17
	I0916 04:05:37.685861    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:37.685924    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:37.686758    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:37.686807    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:37.686819    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:37.686829    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:37.686836    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:37.686867    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:37.686884    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:37.686891    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:37.686897    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:37.686903    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:37.686911    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:37.686919    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:37.686927    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:37.686934    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:37.686941    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:37.686951    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:37.686959    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:39.688030    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 18
	I0916 04:05:39.688045    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:39.688120    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:39.688926    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:39.688982    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:39.688993    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:39.689006    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:39.689013    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:39.689019    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:39.689025    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:39.689032    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:39.689039    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:39.689061    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:39.689092    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:39.689111    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:39.689121    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:39.689128    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:39.689136    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:39.689154    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:39.689163    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:41.691106    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 19
	I0916 04:05:41.691123    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:41.691184    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:41.692022    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:41.692067    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:41.692083    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:41.692094    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:41.692100    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:41.692106    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:41.692119    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:41.692127    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:41.692136    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:41.692143    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:41.692150    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:41.692156    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:41.692163    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:41.692170    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:41.692177    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:41.692185    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:41.692193    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:43.694007    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 20
	I0916 04:05:43.694020    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:43.694061    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:43.694832    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:43.694881    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:43.694890    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:43.694901    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:43.694911    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:43.694918    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:43.694924    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:43.694940    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:43.694952    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:43.694972    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:43.694980    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:43.694986    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:43.694994    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:43.695000    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:43.695020    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:43.695026    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:43.695034    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:45.695840    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 21
	I0916 04:05:45.695856    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:45.695912    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:45.696767    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:45.696803    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:45.696812    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:45.696831    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:45.696836    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:45.696843    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:45.696849    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:45.696855    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:45.696863    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:45.696870    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:45.696875    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:45.696891    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:45.696902    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:45.696909    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:45.696917    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:45.696929    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:45.696937    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:47.699002    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 22
	I0916 04:05:47.699015    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:47.699082    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:47.699898    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:47.699906    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:47.699915    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:47.699921    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:47.699927    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:47.699932    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:47.699938    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:47.699945    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:47.699951    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:47.699957    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:47.699965    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:47.699971    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:47.699977    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:47.699983    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:47.699989    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:47.699997    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:47.700004    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:49.701199    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 23
	I0916 04:05:49.701212    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:49.701261    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:49.702073    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:49.702125    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:49.702133    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:49.702150    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:49.702159    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:49.702179    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:49.702187    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:49.702194    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:49.702200    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:49.702207    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:49.702236    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:49.702248    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:49.702260    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:49.702268    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:49.702275    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:49.702282    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:49.702305    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:51.703276    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 24
	I0916 04:05:51.703291    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:51.703402    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:51.704204    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:51.704253    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:51.704260    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:51.704281    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:51.704290    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:51.704298    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:51.704304    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:51.704319    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:51.704331    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:51.704356    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:51.704369    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:51.704378    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:51.704386    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:51.704393    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:51.704401    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:51.704408    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:51.704414    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:53.706468    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 25
	I0916 04:05:53.706482    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:53.706547    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:53.707345    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:53.707391    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:53.707399    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:53.707410    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:53.707417    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:53.707423    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:53.707429    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:53.707435    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:53.707441    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:53.707460    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:53.707476    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:53.707493    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:53.707506    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:53.707519    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:53.707529    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:53.707536    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:53.707544    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:55.707939    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 26
	I0916 04:05:55.707950    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:55.708017    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:55.708803    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:55.708851    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:55.708865    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:55.708879    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:55.708891    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:55.708900    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:55.708907    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:55.708913    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:55.708921    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:55.708927    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:55.708932    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:55.708951    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:55.708963    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:55.708970    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:55.708980    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:55.708987    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:55.708994    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:57.711080    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 27
	I0916 04:05:57.711105    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:57.711116    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:57.711930    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:57.711971    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:57.711987    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:57.712008    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:57.712020    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:57.712026    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:57.712034    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:57.712040    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:57.712048    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:57.712055    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:57.712062    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:57.712068    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:57.712082    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:57.712105    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:57.712118    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:57.712138    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:57.712150    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:59.714185    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 28
	I0916 04:05:59.714197    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:59.714256    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:05:59.715100    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:05:59.715148    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:05:59.715160    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:05:59.715178    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:05:59.715187    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:05:59.715194    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:05:59.715200    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:05:59.715212    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:05:59.715230    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:05:59.715237    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:05:59.715243    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:05:59.715249    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:05:59.715256    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:05:59.715262    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:05:59.715269    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:05:59.715276    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:05:59.715285    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:01.717337    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Attempt 29
	I0916 04:06:01.717349    5339 main.go:141] libmachine: (offline-docker-384000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:01.717454    5339 main.go:141] libmachine: (offline-docker-384000) DBG | hyperkit pid from json: 5579
	I0916 04:06:01.718219    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Searching for 76:49:54:7c:db:f1 in /var/db/dhcpd_leases ...
	I0916 04:06:01.718269    5339 main.go:141] libmachine: (offline-docker-384000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:01.718283    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:01.718291    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:01.718301    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:01.718308    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:01.718315    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:01.718321    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:01.718329    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:01.718335    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:01.718342    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:01.718349    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:01.718357    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:01.718369    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:01.718379    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:01.718396    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:01.718408    5339 main.go:141] libmachine: (offline-docker-384000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:03.720580    5339 client.go:171] duration metric: took 1m0.775079078s to LocalClient.Create
	I0916 04:06:05.722681    5339 start.go:128] duration metric: took 1m2.80854943s to createHost
	I0916 04:06:05.722717    5339 start.go:83] releasing machines lock for "offline-docker-384000", held for 1m2.808638928s
	W0916 04:06:05.722811    5339 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-384000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:49:54:7c:db:f1
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-384000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:49:54:7c:db:f1
	I0916 04:06:05.788585    5339 out.go:201] 
	W0916 04:06:05.809603    5339 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:49:54:7c:db:f1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:49:54:7c:db:f1
	W0916 04:06:05.809621    5339 out.go:270] * 
	* 
	W0916 04:06:05.810295    5339 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:06:05.872565    5339 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-384000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-16 04:06:05.973258 -0700 PDT m=+2774.638644990
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-384000 -n offline-docker-384000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-384000 -n offline-docker-384000: exit status 7 (87.708745ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 04:06:06.058885    5624 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:06:06.058903    5624 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-384000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-384000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-384000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-384000: (5.251835339s)
--- FAIL: TestOffline (195.30s)
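The failure above is the hyperkit driver timing out while polling `/var/db/dhcpd_leases` for the new VM's MAC (`76:49:54:7c:db:f1`): each "Attempt N" re-reads the lease file, finds only the 15 stale entries, and retries until the one-minute create timeout fires. A minimal Go sketch of that lookup, assuming the key=value block layout that macOS `bootpd` writes (illustrative only — not the driver's actual parser):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findLeaseIP scans dhcpd_leases-style content for a lease block whose
// hw_address matches mac (case-insensitively) and returns that block's
// ip_address, or "" if no block matches. The "1," prefix on hw_address
// is the DHCP hardware-type code and is skipped before comparing.
func findLeaseIP(content, mac string) string {
	var ip, hw string
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // start of a new lease block: reset fields
			ip, hw = "", ""
		case line == "}": // end of block: check for a match
			if hw != "" && strings.EqualFold(hw, mac) {
				return ip
			}
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
		}
	}
	return ""
}

func main() {
	leases := `{
	name=minikube
	ip_address=192.169.0.5
	hw_address=1,1a:88:5b:ab:d5:de
}
{
	name=minikube
	ip_address=192.169.0.4
	hw_address=1,72:b0:31:a7:54:f1
}`
	fmt.Println(findLeaseIP(leases, "1a:88:5b:ab:d5:de")) // found: 192.169.0.5
	fmt.Println(findLeaseIP(leases, "76:49:54:7c:db:f1")) // absent, as in this test run: ""
}
```

Because the guest never acquires a lease, the MAC simply never appears in the file, so no amount of retrying on the host side can succeed — which is why the run ends in `GUEST_PROVISION` rather than a parse error.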

                                                
                                    
TestAddons/Setup (77.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-981000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-981000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (1m17.036918968s)

                                                
                                                
-- stdout --
	* [addons-981000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "addons-981000" primary control-plane node in "addons-981000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:20:17.018647    1730 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:20:17.018822    1730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:17.018828    1730 out.go:358] Setting ErrFile to fd 2...
	I0916 03:20:17.018832    1730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:17.019012    1730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:20:17.020482    1730 out.go:352] Setting JSON to false
	I0916 03:20:17.042911    1730 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1188,"bootTime":1726480829,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:20:17.043056    1730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:20:17.066934    1730 out.go:177] * [addons-981000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:20:17.108859    1730 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:20:17.108935    1730 notify.go:220] Checking for updates...
	I0916 03:20:17.152393    1730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:20:17.173732    1730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:20:17.194629    1730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:20:17.215554    1730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:20:17.236756    1730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:20:17.258017    1730 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:20:17.287496    1730 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 03:20:17.329831    1730 start.go:297] selected driver: hyperkit
	I0916 03:20:17.329859    1730 start.go:901] validating driver "hyperkit" against <nil>
	I0916 03:20:17.329875    1730 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:20:17.334234    1730 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:17.334354    1730 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:20:17.342627    1730 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:20:17.346581    1730 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:20:17.346601    1730 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:20:17.346631    1730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:20:17.346845    1730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:20:17.346882    1730 cni.go:84] Creating CNI manager for ""
	I0916 03:20:17.346918    1730 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:17.346924    1730 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 03:20:17.346984    1730 start.go:340] cluster config:
	{Name:addons-981000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:20:17.347066    1730 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:17.389742    1730 out.go:177] * Starting "addons-981000" primary control-plane node in "addons-981000" cluster
	I0916 03:20:17.410743    1730 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:17.410812    1730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:20:17.410835    1730 cache.go:56] Caching tarball of preloaded images
	I0916 03:20:17.411032    1730 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:20:17.411051    1730 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:20:17.411508    1730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/addons-981000/config.json ...
	I0916 03:20:17.411548    1730 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/addons-981000/config.json: {Name:mkef398cb3cae4f179b08ae2b9151182c4304155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:20:17.412145    1730 start.go:360] acquireMachinesLock for addons-981000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:20:17.412349    1730 start.go:364] duration metric: took 183.85µs to acquireMachinesLock for "addons-981000"
	I0916 03:20:17.412396    1730 start.go:93] Provisioning new machine with config: &{Name:addons-981000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-981000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:20:17.412467    1730 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 03:20:17.454789    1730 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 03:20:17.455069    1730 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:20:17.455137    1730 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:20:17.464258    1730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49649
	I0916 03:20:17.464617    1730 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:20:17.465012    1730 main.go:141] libmachine: Using API Version  1
	I0916 03:20:17.465021    1730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:20:17.465222    1730 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:20:17.465362    1730 main.go:141] libmachine: (addons-981000) Calling .GetMachineName
	I0916 03:20:17.465475    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:17.465591    1730 start.go:159] libmachine.API.Create for "addons-981000" (driver="hyperkit")
	I0916 03:20:17.465614    1730 client.go:168] LocalClient.Create starting
	I0916 03:20:17.465651    1730 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 03:20:17.540859    1730 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 03:20:17.575154    1730 main.go:141] libmachine: Running pre-create checks...
	I0916 03:20:17.575171    1730 main.go:141] libmachine: (addons-981000) Calling .PreCreateCheck
	I0916 03:20:17.575307    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:17.575470    1730 main.go:141] libmachine: (addons-981000) Calling .GetConfigRaw
	I0916 03:20:17.575892    1730 main.go:141] libmachine: Creating machine...
	I0916 03:20:17.575911    1730 main.go:141] libmachine: (addons-981000) Calling .Create
	I0916 03:20:17.576000    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:17.576121    1730 main.go:141] libmachine: (addons-981000) DBG | I0916 03:20:17.576000    1738 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:20:17.576241    1730 main.go:141] libmachine: (addons-981000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 03:20:17.845329    1730 main.go:141] libmachine: (addons-981000) DBG | I0916 03:20:17.845233    1738 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/id_rsa...
	I0916 03:20:18.004990    1730 main.go:141] libmachine: (addons-981000) DBG | I0916 03:20:18.004890    1738 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/addons-981000.rawdisk...
	I0916 03:20:18.005007    1730 main.go:141] libmachine: (addons-981000) DBG | Writing magic tar header
	I0916 03:20:18.005017    1730 main.go:141] libmachine: (addons-981000) DBG | Writing SSH key tar header
	I0916 03:20:18.005294    1730 main.go:141] libmachine: (addons-981000) DBG | I0916 03:20:18.005245    1738 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000 ...
	I0916 03:20:18.506125    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:18.506144    1730 main.go:141] libmachine: (addons-981000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/hyperkit.pid
	I0916 03:20:18.506192    1730 main.go:141] libmachine: (addons-981000) DBG | Using UUID 0fd1c7da-5fc4-417b-9106-a3601ede2c77
	I0916 03:20:18.732713    1730 main.go:141] libmachine: (addons-981000) DBG | Generated MAC c2:a3:9:c2:ab:37
	I0916 03:20:18.732740    1730 main.go:141] libmachine: (addons-981000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-981000
	I0916 03:20:18.732777    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0fd1c7da-5fc4-417b-9106-a3601ede2c77", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:20:18.732808    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0fd1c7da-5fc4-417b-9106-a3601ede2c77", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:20:18.732849    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0fd1c7da-5fc4-417b-9106-a3601ede2c77", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/addons-981000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-981000"}
	I0916 03:20:18.732882    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0fd1c7da-5fc4-417b-9106-a3601ede2c77 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/addons-981000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-981000"
	I0916 03:20:18.732895    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:20:18.735867    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 DEBUG: hyperkit: Pid is 1744
	I0916 03:20:18.736254    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 0
	I0916 03:20:18.736264    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:18.736313    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:18.737142    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:18.753286    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:20:18.808985    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:20:18.809671    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:20:18.809687    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:20:18.809705    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:20:18.809722    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:20:19.340704    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:20:19.340719    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:20:19.456698    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:20:19.456718    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:20:19.456728    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:20:19.456736    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:20:19.457586    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:20:19.457597    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:20:20.737314    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 1
	I0916 03:20:20.737335    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:20.737430    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:20.738230    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:22.739321    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 2
	I0916 03:20:22.739336    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:22.739399    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:22.740251    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:24.741212    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 3
	I0916 03:20:24.741227    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:24.741326    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:24.742109    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:25.052782    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:20:25.052851    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:20:25.052860    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:20:25.073207    1730 main.go:141] libmachine: (addons-981000) DBG | 2024/09/16 03:20:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:20:26.743177    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 4
	I0916 03:20:26.743193    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:26.743258    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:26.744016    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:28.744195    1730 main.go:141] libmachine: (addons-981000) DBG | Attempt 5
	I0916 03:20:28.744212    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:28.744347    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:28.745411    1730 main.go:141] libmachine: (addons-981000) DBG | Searching for c2:a3:9:c2:ab:37 in /var/db/dhcpd_leases ...
	I0916 03:20:28.745482    1730 main.go:141] libmachine: (addons-981000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0916 03:20:28.745499    1730 main.go:141] libmachine: (addons-981000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:20:28.745509    1730 main.go:141] libmachine: (addons-981000) DBG | Found match: c2:a3:9:c2:ab:37
	I0916 03:20:28.745516    1730 main.go:141] libmachine: (addons-981000) DBG | IP: 192.169.0.2
	I0916 03:20:28.745580    1730 main.go:141] libmachine: (addons-981000) Calling .GetConfigRaw
	I0916 03:20:28.746323    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:28.746430    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:28.746531    1730 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 03:20:28.746545    1730 main.go:141] libmachine: (addons-981000) Calling .GetState
	I0916 03:20:28.746631    1730 main.go:141] libmachine: (addons-981000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:20:28.746691    1730 main.go:141] libmachine: (addons-981000) DBG | hyperkit pid from json: 1744
	I0916 03:20:28.747581    1730 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 03:20:28.747590    1730 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 03:20:28.747595    1730 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 03:20:28.747600    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:28.747738    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:28.747833    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:28.747924    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:28.748016    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:28.748732    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:28.748901    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:28.748913    1730 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 03:20:29.814767    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:20:29.814786    1730 main.go:141] libmachine: Detecting the provisioner...
	I0916 03:20:29.814792    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:29.814925    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:29.815025    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.815131    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.815231    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:29.815389    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:29.815531    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:29.815539    1730 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 03:20:29.876301    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 03:20:29.876338    1730 main.go:141] libmachine: found compatible host: buildroot
	I0916 03:20:29.876344    1730 main.go:141] libmachine: Provisioning with buildroot...
	I0916 03:20:29.876348    1730 main.go:141] libmachine: (addons-981000) Calling .GetMachineName
	I0916 03:20:29.876487    1730 buildroot.go:166] provisioning hostname "addons-981000"
	I0916 03:20:29.876499    1730 main.go:141] libmachine: (addons-981000) Calling .GetMachineName
	I0916 03:20:29.876591    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:29.876696    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:29.876793    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.876906    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.877004    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:29.877140    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:29.877285    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:29.877294    1730 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-981000 && echo "addons-981000" | sudo tee /etc/hostname
	I0916 03:20:29.947347    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-981000
	
	I0916 03:20:29.947366    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:29.947495    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:29.947582    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.947670    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:29.947771    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:29.947896    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:29.948031    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:29.948043    1730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-981000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-981000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-981000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:20:30.015961    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:20:30.015983    1730 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:20:30.016001    1730 buildroot.go:174] setting up certificates
	I0916 03:20:30.016014    1730 provision.go:84] configureAuth start
	I0916 03:20:30.016021    1730 main.go:141] libmachine: (addons-981000) Calling .GetMachineName
	I0916 03:20:30.016153    1730 main.go:141] libmachine: (addons-981000) Calling .GetIP
	I0916 03:20:30.016235    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:30.016327    1730 provision.go:143] copyHostCerts
	I0916 03:20:30.016435    1730 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:20:30.016722    1730 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:20:30.016905    1730 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:20:30.017051    1730 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.addons-981000 san=[127.0.0.1 192.169.0.2 addons-981000 localhost minikube]
	I0916 03:20:30.154395    1730 provision.go:177] copyRemoteCerts
	I0916 03:20:30.154459    1730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:20:30.154474    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:30.154609    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:30.154695    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.154777    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:30.154870    1730 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/id_rsa Username:docker}
	I0916 03:20:30.191718    1730 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:20:30.211623    1730 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 03:20:30.231655    1730 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 03:20:30.251262    1730 provision.go:87] duration metric: took 235.237009ms to configureAuth
	I0916 03:20:30.251275    1730 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:20:30.251415    1730 config.go:182] Loaded profile config "addons-981000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:20:30.251429    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:30.251566    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:30.251659    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:30.251735    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.251837    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.251915    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:30.252030    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:30.252152    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:30.252159    1730 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:20:30.313536    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:20:30.313549    1730 buildroot.go:70] root file system type: tmpfs
	I0916 03:20:30.313621    1730 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:20:30.313633    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:30.313763    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:30.313848    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.313933    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.314009    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:30.314144    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:30.314288    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:30.314331    1730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:20:30.386120    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:20:30.386151    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:30.386275    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:30.386366    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.386451    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:30.386539    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:30.386672    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:30.386813    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:30.386828    1730 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:20:31.921832    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
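
	The SSH command above uses a compare-then-replace idiom: install the new unit file and restart the service only when its content actually differs. A minimal, self-contained sketch of that idiom against throwaway files (not the real /lib/systemd paths, and with the daemon-reload/restart step elided):

	```shell
	set -eu
	dir=$(mktemp -d)
	printf 'old\n' > "$dir/docker.service"
	printf 'new\n' > "$dir/docker.service.new"
	# diff exits non-zero when the files differ, so the || branch
	# swaps the new file into place only on a real change.
	diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || {
	  mv "$dir/docker.service.new" "$dir/docker.service"
	  # on the real host: systemctl daemon-reload && systemctl restart docker
	}
	cat "$dir/docker.service"
	```

	In the log above the first file does not exist yet, so diff fails with "No such file or directory" and the install branch runs unconditionally.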
	
	I0916 03:20:31.921846    1730 main.go:141] libmachine: Checking connection to Docker...
	I0916 03:20:31.921853    1730 main.go:141] libmachine: (addons-981000) Calling .GetURL
	I0916 03:20:31.921983    1730 main.go:141] libmachine: Docker is up and running!
	I0916 03:20:31.921991    1730 main.go:141] libmachine: Reticulating splines...
	I0916 03:20:31.922000    1730 client.go:171] duration metric: took 14.456608838s to LocalClient.Create
	I0916 03:20:31.922013    1730 start.go:167] duration metric: took 14.456654305s to libmachine.API.Create "addons-981000"
	I0916 03:20:31.922021    1730 start.go:293] postStartSetup for "addons-981000" (driver="hyperkit")
	I0916 03:20:31.922028    1730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:20:31.922039    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:31.922199    1730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:20:31.922214    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:31.922300    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:31.922412    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:31.922519    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:31.922616    1730 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/id_rsa Username:docker}
	I0916 03:20:31.963964    1730 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:20:31.967230    1730 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:20:31.967248    1730 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:20:31.967346    1730 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:20:31.967395    1730 start.go:296] duration metric: took 45.370515ms for postStartSetup
	I0916 03:20:31.967416    1730 main.go:141] libmachine: (addons-981000) Calling .GetConfigRaw
	I0916 03:20:31.968029    1730 main.go:141] libmachine: (addons-981000) Calling .GetIP
	I0916 03:20:31.968185    1730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/addons-981000/config.json ...
	I0916 03:20:31.968510    1730 start.go:128] duration metric: took 14.556263649s to createHost
	I0916 03:20:31.968523    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:31.968616    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:31.968704    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:31.968790    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:31.968874    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:31.968999    1730 main.go:141] libmachine: Using SSH client type: native
	I0916 03:20:31.969116    1730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe110820] 0xe113500 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0916 03:20:31.969124    1730 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:20:32.038756    1730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482032.164925546
	
	I0916 03:20:32.038767    1730 fix.go:216] guest clock: 1726482032.164925546
	I0916 03:20:32.038779    1730 fix.go:229] Guest: 2024-09-16 03:20:32.164925546 -0700 PDT Remote: 2024-09-16 03:20:31.968517 -0700 PDT m=+14.984858739 (delta=196.408546ms)
	I0916 03:20:32.038796    1730 fix.go:200] guest clock delta is within tolerance: 196.408546ms
	I0916 03:20:32.038799    1730 start.go:83] releasing machines lock for "addons-981000", held for 14.626673824s
	I0916 03:20:32.038816    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:32.038957    1730 main.go:141] libmachine: (addons-981000) Calling .GetIP
	I0916 03:20:32.039049    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:32.039342    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:32.039442    1730 main.go:141] libmachine: (addons-981000) Calling .DriverName
	I0916 03:20:32.039518    1730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:20:32.039542    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:32.039562    1730 ssh_runner.go:195] Run: cat /version.json
	I0916 03:20:32.039571    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHHostname
	I0916 03:20:32.039641    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:32.039653    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHPort
	I0916 03:20:32.039741    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:32.039749    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHKeyPath
	I0916 03:20:32.039825    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:32.039842    1730 main.go:141] libmachine: (addons-981000) Calling .GetSSHUsername
	I0916 03:20:32.039904    1730 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/id_rsa Username:docker}
	I0916 03:20:32.039918    1730 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/addons-981000/id_rsa Username:docker}
	I0916 03:20:32.077797    1730 ssh_runner.go:195] Run: systemctl --version
	I0916 03:20:32.122498    1730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:20:32.127130    1730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:20:32.127180    1730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:20:32.139628    1730 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
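
	The find command above disables competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix. A runnable sketch of the same step against a scratch directory (illustrative file names, not the real /etc/cni/net.d):

	```shell
	set -eu
	cni=$(mktemp -d)
	touch "$cni/87-podman-bridge.conflist" "$cni/200-loopback.conf"
	# Match bridge/podman configs not already disabled, print them,
	# and rename each one out of the way.
	find "$cni" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
	  -a -not -name '*.mk_disabled' \) -printf '%p, ' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	```

	The loopback config is left untouched, matching the log line where only the podman bridge conflist is reported as disabled.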
	I0916 03:20:32.139643    1730 start.go:495] detecting cgroup driver to use...
	I0916 03:20:32.139749    1730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:20:32.154456    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:20:32.162682    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:20:32.170709    1730 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:20:32.170766    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:20:32.178967    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:20:32.187062    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:20:32.195125    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:20:32.203420    1730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:20:32.211848    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:20:32.220031    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:20:32.228300    1730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
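
	The sed invocations above rewrite /etc/containerd/config.toml in place while preserving indentation via the captured leading-space group. A minimal rerun of the SystemdCgroup rewrite against a throwaway copy (assumes GNU sed, as on the Buildroot guest):

	```shell
	set -eu
	cfg=$(mktemp)
	printf '    SystemdCgroup = true\n' > "$cfg"
	# \1 re-emits the captured indentation so the TOML nesting is preserved.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
	cat "$cfg"
	```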
	I0916 03:20:32.236690    1730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:20:32.243909    1730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:20:32.251193    1730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:32.346175    1730 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:20:32.365811    1730 start.go:495] detecting cgroup driver to use...
	I0916 03:20:32.365898    1730 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:20:32.381564    1730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:20:32.399105    1730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:20:32.416800    1730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:20:32.428195    1730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:20:32.439129    1730 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:20:32.459927    1730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:20:32.470275    1730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:20:32.485329    1730 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:20:32.488285    1730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:20:32.495376    1730 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:20:32.509493    1730 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:20:32.608931    1730 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:20:32.706059    1730 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:20:32.706133    1730 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:20:32.721969    1730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:20:32.817795    1730 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:21:33.836561    1730 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019726582s)
	I0916 03:21:33.836639    1730 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0916 03:21:33.872104    1730 out.go:201] 
	W0916 03:21:33.893761    1730 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 16 10:20:30 addons-981000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:20:30 addons-981000 dockerd[513]: time="2024-09-16T10:20:30.808959791Z" level=info msg="Starting up"
	Sep 16 10:20:30 addons-981000 dockerd[513]: time="2024-09-16T10:20:30.809417957Z" level=info msg="containerd not running, starting managed containerd"
	Sep 16 10:20:30 addons-981000 dockerd[513]: time="2024-09-16T10:20:30.809978532Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.825447205Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.841860226Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.841932445Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.841997995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842039624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842119414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842155486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842312936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842353972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842415402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842458055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842537825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.842710975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844259461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844358588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844495709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844541435Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844630822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844695038Z" level=info msg="metadata content store policy set" policy=shared
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847031053Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847119345Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847166901Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847201322Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847233583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847325371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847539642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847642707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847680623Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847753841Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847796778Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847834958Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847867362Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847898179Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847937570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847972109Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848002167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848032763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848069070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848100964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848131030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848163944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848194178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848224617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848253569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848282452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848312577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848347326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848377727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848406716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848436219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848513471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848562601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848595799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848626121Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848699717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848743896Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848775157Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848804786Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848832952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848861859Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848889971Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849076904Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849163115Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849224343Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849306037Z" level=info msg="containerd successfully booted in 0.024434s"
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.830311374Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.841331600Z" level=info msg="Loading containers: start."
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.930391123Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.011477128Z" level=info msg="Loading containers: done."
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.018837937Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.019018812Z" level=info msg="Daemon has completed initialization"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.045095017Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 16 10:20:32 addons-981000 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.047407793Z" level=info msg="API listen on [::]:2376"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.956890941Z" level=info msg="Processing signal 'terminated'"
	Sep 16 10:20:32 addons-981000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958115538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958564746Z" level=info msg="Daemon shutdown complete"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958658791Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958675426Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 16 10:20:33 addons-981000 systemd[1]: docker.service: Deactivated successfully.
	Sep 16 10:20:33 addons-981000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 10:20:33 addons-981000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:20:33 addons-981000 dockerd[913]: time="2024-09-16T10:20:33.988892007Z" level=info msg="Starting up"
	Sep 16 10:21:34 addons-981000 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 10:21:34 addons-981000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 10:21:34 addons-981000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 10:21:34 addons-981000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.844695038Z" level=info msg="metadata content store policy set" policy=shared
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847031053Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847119345Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847166901Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847201322Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847233583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847325371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847539642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847642707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847680623Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847753841Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847796778Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847834958Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847867362Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847898179Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847937570Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.847972109Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848002167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848032763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848069070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848100964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848131030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848163944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848194178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848224617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848253569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848282452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848312577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848347326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848377727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848406716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848436219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848513471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848562601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848595799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848626121Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848699717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848743896Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848775157Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848804786Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848832952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848861859Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.848889971Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849076904Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849163115Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849224343Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 16 10:20:30 addons-981000 dockerd[520]: time="2024-09-16T10:20:30.849306037Z" level=info msg="containerd successfully booted in 0.024434s"
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.830311374Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.841331600Z" level=info msg="Loading containers: start."
	Sep 16 10:20:31 addons-981000 dockerd[513]: time="2024-09-16T10:20:31.930391123Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.011477128Z" level=info msg="Loading containers: done."
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.018837937Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.019018812Z" level=info msg="Daemon has completed initialization"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.045095017Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 16 10:20:32 addons-981000 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.047407793Z" level=info msg="API listen on [::]:2376"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.956890941Z" level=info msg="Processing signal 'terminated'"
	Sep 16 10:20:32 addons-981000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958115538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958564746Z" level=info msg="Daemon shutdown complete"
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958658791Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 16 10:20:32 addons-981000 dockerd[513]: time="2024-09-16T10:20:32.958675426Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 16 10:20:33 addons-981000 systemd[1]: docker.service: Deactivated successfully.
	Sep 16 10:20:33 addons-981000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 10:20:33 addons-981000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:20:33 addons-981000 dockerd[913]: time="2024-09-16T10:20:33.988892007Z" level=info msg="Starting up"
	Sep 16 10:21:34 addons-981000 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 10:21:34 addons-981000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 10:21:34 addons-981000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 10:21:34 addons-981000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0916 03:21:33.893843    1730 out.go:270] * 
	* 
	W0916 03:21:33.895016    1730 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:21:33.977680    1730 out.go:201] 

** /stderr **
addons_test.go:112: out/minikube-darwin-amd64 start -p addons-981000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (77.05s)

TestCertOptions (251.78s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-262000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0916 04:12:39.733477    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:12:58.550909    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:13:07.445271    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-262000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.101436687s)

-- stdout --
	* [cert-options-262000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-262000" primary control-plane node in "cert-options-262000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-262000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:a2:4f:4e:95:b3
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-262000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:c7:af:75:7c:e3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:c7:af:75:7c:e3
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-262000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-262000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-262000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (162.577873ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-262000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-262000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-262000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-262000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-262000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (162.011394ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-262000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-262000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-262000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 04:15:33.179124 -0700 PDT m=+3341.836582107
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-262000 -n cert-options-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-262000 -n cert-options-262000: exit status 7 (78.514875ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 04:15:33.255962    6184 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:15:33.255984    6184 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-262000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-262000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-262000: (5.238734511s)
--- FAIL: TestCertOptions (251.78s)

TestCertExpiration (1710.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration


=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0916 04:10:23.600788    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.497361467s)

-- stdout --
	* [cert-expiration-820000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-820000" primary control-plane node in "cert-expiration-820000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-820000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:a:19:8a:b1:3b
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-820000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c7:60:9d:d3:56
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c7:60:9d:d3:56
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0916 04:17:39.736039    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:17:58.554063    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m18.861260948s)

-- stdout --
	* [cert-expiration-820000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-820000" primary control-plane node in "cert-expiration-820000" cluster
	* Updating the running hyperkit "cert-expiration-820000" VM ...
	* Updating the running hyperkit "cert-expiration-820000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-820000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-820000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-820000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-820000" primary control-plane node in "cert-expiration-820000" cluster
	* Updating the running hyperkit "cert-expiration-820000" VM ...
	* Updating the running hyperkit "cert-expiration-820000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-820000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-16 04:38:48.781439 -0700 PDT m=+4737.426666889
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-820000 -n cert-expiration-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-820000 -n cert-expiration-820000: exit status 7 (78.94542ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 04:38:48.858486    7868 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:38:48.858506    7868 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-820000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-820000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-820000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-820000: (5.257061107s)
--- FAIL: TestCertExpiration (1710.70s)
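[Editor's note] Every failed start in this report bottoms out in the same provisioning error, "error getting ip during provisioning: IP address is not set": on macOS the hyperkit driver resolves a guest's IP by searching the bootpd lease table for the VM's generated MAC address, and in these runs no matching lease ever appears. A rough manual check of that table is sketched below; this is an illustration only, the lease-file path is the one the driver debug output polls later in this log, and the MAC is a placeholder to replace with the "Generated MAC" line from your own run.

```shell
# Sketch: check whether macOS's bootpd ever handed the VM a DHCP lease.
LEASES=/var/db/dhcpd_leases       # lease table the hyperkit driver searches
MAC="86:d8:21:2c:aa:6e"           # placeholder; substitute the "Generated MAC" from your log
if lease=$(grep -B2 -A2 "$MAC" "$LEASES" 2>/dev/null) && [ -n "$lease" ]; then
  echo "$lease"                   # a lease exists, so the driver should resolve the IP
else
  result="missing"
  echo "no lease for $MAC -> minikube reports: IP address is not set"
fi
```

If the MAC never shows up, the log's own advice applies: `minikube delete -p <profile>` clears the stale machine state before retrying.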

TestDockerFlags (252.27s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-991000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0916 04:07:39.729179    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:39.736459    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:39.748406    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:39.769999    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:39.813060    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:39.895008    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:40.058028    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:40.379476    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:41.020894    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:42.304323    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:44.866251    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:49.988221    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:07:58.547087    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:08:00.231523    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:08:20.713274    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:09:01.677263    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-991000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.540229037s)

-- stdout --
	* [docker-flags-991000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-991000" primary control-plane node in "docker-flags-991000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-991000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0916 04:07:14.492166    5706 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:07:14.492416    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:07:14.492421    5706 out.go:358] Setting ErrFile to fd 2...
	I0916 04:07:14.492425    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:07:14.492588    5706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 04:07:14.494104    5706 out.go:352] Setting JSON to false
	I0916 04:07:14.516580    5706 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4004,"bootTime":1726480830,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 04:07:14.516727    5706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:07:14.538386    5706 out.go:177] * [docker-flags-991000] minikube v1.34.0 on Darwin 14.6.1
	I0916 04:07:14.580986    5706 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:07:14.581005    5706 notify.go:220] Checking for updates...
	I0916 04:07:14.622879    5706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 04:07:14.643886    5706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 04:07:14.685959    5706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:07:14.727953    5706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:07:14.748947    5706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:07:14.770422    5706 config.go:182] Loaded profile config "force-systemd-flag-218000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:07:14.770516    5706 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:07:14.799018    5706 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 04:07:14.840903    5706 start.go:297] selected driver: hyperkit
	I0916 04:07:14.840916    5706 start.go:901] validating driver "hyperkit" against <nil>
	I0916 04:07:14.840926    5706 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:07:14.843884    5706 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:07:14.844013    5706 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 04:07:14.852516    5706 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 04:07:14.856447    5706 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:07:14.856466    5706 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 04:07:14.856500    5706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:07:14.856755    5706 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0916 04:07:14.856791    5706 cni.go:84] Creating CNI manager for ""
	I0916 04:07:14.856827    5706 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:07:14.856833    5706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:07:14.856898    5706 start.go:340] cluster config:
	{Name:docker-flags-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:07:14.856998    5706 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:07:14.898956    5706 out.go:177] * Starting "docker-flags-991000" primary control-plane node in "docker-flags-991000" cluster
	I0916 04:07:14.919914    5706 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:07:14.919954    5706 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 04:07:14.919972    5706 cache.go:56] Caching tarball of preloaded images
	I0916 04:07:14.920093    5706 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 04:07:14.920103    5706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:07:14.920182    5706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/docker-flags-991000/config.json ...
	I0916 04:07:14.920202    5706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/docker-flags-991000/config.json: {Name:mk5d64d43ff84a53783641fb95432b662416584e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:07:14.920569    5706 start.go:360] acquireMachinesLock for docker-flags-991000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:08:11.819950    5706 start.go:364] duration metric: took 56.898553962s to acquireMachinesLock for "docker-flags-991000"
	I0916 04:08:11.820000    5706 start.go:93] Provisioning new machine with config: &{Name:docker-flags-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSH
Key: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:08:11.820069    5706 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:08:11.841568    5706 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:08:11.841753    5706 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:08:11.841800    5706 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:08:11.850257    5706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52720
	I0916 04:08:11.850593    5706 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:08:11.850991    5706 main.go:141] libmachine: Using API Version  1
	I0916 04:08:11.851001    5706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:08:11.851258    5706 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:08:11.851377    5706 main.go:141] libmachine: (docker-flags-991000) Calling .GetMachineName
	I0916 04:08:11.851478    5706 main.go:141] libmachine: (docker-flags-991000) Calling .DriverName
	I0916 04:08:11.851594    5706 start.go:159] libmachine.API.Create for "docker-flags-991000" (driver="hyperkit")
	I0916 04:08:11.851617    5706 client.go:168] LocalClient.Create starting
	I0916 04:08:11.851653    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:08:11.851707    5706 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:11.851723    5706 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:11.851789    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:08:11.851827    5706 main.go:141] libmachine: Decoding PEM data...
	I0916 04:08:11.851839    5706 main.go:141] libmachine: Parsing certificate...
	I0916 04:08:11.851851    5706 main.go:141] libmachine: Running pre-create checks...
	I0916 04:08:11.851859    5706 main.go:141] libmachine: (docker-flags-991000) Calling .PreCreateCheck
	I0916 04:08:11.851935    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.852092    5706 main.go:141] libmachine: (docker-flags-991000) Calling .GetConfigRaw
	I0916 04:08:11.904350    5706 main.go:141] libmachine: Creating machine...
	I0916 04:08:11.904361    5706 main.go:141] libmachine: (docker-flags-991000) Calling .Create
	I0916 04:08:11.904478    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.904597    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:08:11.904461    5776 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:08:11.904651    5706 main.go:141] libmachine: (docker-flags-991000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:08:12.108990    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:08:12.108892    5776 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/id_rsa...
	I0916 04:08:12.255605    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:08:12.255527    5776 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk...
	I0916 04:08:12.255616    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Writing magic tar header
	I0916 04:08:12.255629    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Writing SSH key tar header
	I0916 04:08:12.256195    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:08:12.256152    5776 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000 ...
	I0916 04:08:12.635781    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:12.635802    5706 main.go:141] libmachine: (docker-flags-991000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid
	I0916 04:08:12.635854    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Using UUID 547a9754-c90a-49b1-bc9c-f088fc72bf18
	I0916 04:08:12.660997    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Generated MAC 86:d8:21:2c:aa:6e
	I0916 04:08:12.661015    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000
	I0916 04:08:12.661048    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"547a9754-c90a-49b1-bc9c-f088fc72bf18", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pro
cess:(*os.Process)(nil)}
	I0916 04:08:12.661076    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"547a9754-c90a-49b1-bc9c-f088fc72bf18", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pro
cess:(*os.Process)(nil)}
	I0916 04:08:12.661139    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "547a9754-c90a-49b1-bc9c-f088fc72bf18", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage,/Users/jenkins/m
inikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000"}
	I0916 04:08:12.661180    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 547a9754-c90a-49b1-bc9c-f088fc72bf18 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags
-991000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000"
	I0916 04:08:12.661191    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:08:12.664298    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 DEBUG: hyperkit: Pid is 5778
	I0916 04:08:12.664738    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 0
	I0916 04:08:12.664753    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:12.664849    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:12.665826    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:12.665882    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:12.665904    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:12.665914    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:12.665931    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:12.665955    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:12.665981    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:12.665997    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:12.666009    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:12.666023    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:12.666042    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:12.666055    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:12.666063    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:12.666071    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:12.666077    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:12.666087    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:12.666096    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:12.671950    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:08:12.679791    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:08:12.680721    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:08:12.680743    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:08:12.680757    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:08:12.680767    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:08:13.055953    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:08:13.055968    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:08:13.171165    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:08:13.171186    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:08:13.171200    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:08:13.171215    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:08:13.172064    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:08:13.172080    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:08:14.667131    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 1
	I0916 04:08:14.667147    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:14.667192    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:14.668054    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:14.668096    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:14.668106    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:14.668115    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:14.668124    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:14.668131    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:14.668137    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:14.668143    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:14.668149    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:14.668155    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:14.668163    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:14.668169    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:14.668177    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:14.668183    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:14.668188    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:14.668206    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:14.668219    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:16.669176    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 2
	I0916 04:08:16.669194    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:16.669250    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:16.670141    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:16.670156    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:16.670162    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:16.670169    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:16.670189    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:16.670210    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:16.670221    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:16.670230    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:16.670239    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:16.670246    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:16.670260    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:16.670267    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:16.670274    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:16.670280    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:16.670284    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:16.670290    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:16.670298    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:18.554472    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:08:18.554636    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:08:18.554646    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:08:18.574258    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:08:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:08:18.672363    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 3
	I0916 04:08:18.672377    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:18.672443    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:18.673281    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:18.673338    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:18.673348    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:18.673366    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:18.673376    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:18.673382    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:18.673388    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:18.673413    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:18.673427    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:18.673451    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:18.673462    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:18.673475    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:18.673485    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:18.673499    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:18.673513    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:18.673521    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:18.673526    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:20.674341    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 4
	I0916 04:08:20.674358    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:20.674421    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:20.675221    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:20.675286    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:20.675301    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:20.675317    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:20.675325    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:20.675340    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:20.675350    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:20.675358    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:20.675367    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:20.675383    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:20.675393    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:20.675399    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:20.675406    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:20.675418    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:20.675429    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:20.675442    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:20.675449    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:22.676569    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 5
	I0916 04:08:22.676585    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:22.676644    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:22.677475    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:22.677488    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:22.677495    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:22.677500    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:22.677508    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:22.677513    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:22.677519    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:22.677524    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:22.677531    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:22.677536    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:22.677552    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:22.677565    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:22.677572    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:22.677580    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:22.677595    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:22.677607    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:22.677616    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:24.679658    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 6
	I0916 04:08:24.679672    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:24.679740    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:24.680519    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:24.680577    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:24.680588    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:24.680604    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:24.680610    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:24.680623    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:24.680629    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:24.680635    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:24.680643    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:24.680650    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:24.680658    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:24.680674    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:24.680687    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:24.680696    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:24.680703    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:24.680709    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:24.680717    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:26.680980    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 7
	I0916 04:08:26.680994    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:26.681004    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:26.681797    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:26.681826    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:26.681841    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:26.681848    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:26.681862    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:26.681869    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:26.681875    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:26.681883    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:26.681890    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:26.681898    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:26.681921    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:26.681932    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:26.681940    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:26.681946    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:26.681954    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:26.681969    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:26.681978    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:28.684048    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 8
	I0916 04:08:28.684065    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:28.684111    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:28.684903    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:28.684935    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:28.684953    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:28.684968    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:28.684989    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:28.684996    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:28.685016    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:28.685027    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:28.685034    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:28.685042    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:28.685048    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:28.685056    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:28.685063    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:28.685069    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:28.685074    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:28.685081    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:28.685087    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:30.687148    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 9
	I0916 04:08:30.687163    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:30.687215    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:30.687988    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:30.688036    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:30.688045    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:30.688055    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:30.688061    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:30.688067    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:30.688074    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:30.688080    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:30.688087    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:30.688106    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:30.688113    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:30.688120    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:30.688128    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:30.688136    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:30.688151    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:30.688163    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:30.688174    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:32.689687    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 10
	I0916 04:08:32.689709    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:32.689770    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:32.690534    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:32.690585    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:32.690596    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:32.690615    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:32.690629    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:32.690638    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:32.690646    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:32.690658    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:32.690672    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:32.690685    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:32.690695    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:32.690701    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:32.690708    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:32.690736    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:32.690747    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:32.690753    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:32.690763    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:34.690891    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 11
	I0916 04:08:34.690906    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:34.690974    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:34.691737    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:34.691796    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:34.691809    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:34.691821    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:34.691827    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:34.691845    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:34.691855    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:34.691869    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:34.691876    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:34.691882    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:34.691888    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:34.691894    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:34.691899    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:34.691917    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:34.691931    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:34.691939    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:34.691947    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:36.692026    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 12
	I0916 04:08:36.692041    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:36.692108    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:36.692914    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:36.692939    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:36.692947    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:36.692965    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:36.692978    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:36.692987    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:36.692998    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:36.693014    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:36.693031    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:36.693040    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:36.693048    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:36.693057    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:36.693064    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:36.693071    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:36.693077    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:36.693082    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:36.693090    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:38.694896    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 13
	I0916 04:08:38.694910    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:38.694974    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:38.695747    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:38.695798    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:38.695810    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:38.695818    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:38.695829    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:38.695835    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:38.695845    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:38.695850    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:38.695858    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:38.695863    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:38.695883    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:38.695896    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:38.695904    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:38.695912    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:38.695927    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:38.695938    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:38.695956    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:40.697799    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 14
	I0916 04:08:40.697812    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:40.697867    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:40.698642    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:40.698703    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:40.698717    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:40.698739    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:40.698761    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:40.698778    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:40.698788    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:40.698797    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:40.698806    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:40.698812    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:40.698830    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:40.698838    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:40.698844    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:40.698850    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:40.698861    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:40.698875    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:40.698885    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:42.700892    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 15
	I0916 04:08:42.700908    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:42.700981    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:42.701758    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:42.701813    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:42.701826    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:42.701834    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:42.701841    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:42.701847    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:42.701853    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:42.701873    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:42.701885    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:42.701906    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:42.701914    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:42.701921    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:42.701930    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:42.701936    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:42.701948    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:42.701960    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:42.701968    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:44.702303    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 16
	I0916 04:08:44.702316    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:44.702389    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:44.703161    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:44.703213    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:44.703226    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:44.703233    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:44.703240    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:44.703261    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:44.703282    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:44.703294    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:44.703304    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:44.703315    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:44.703323    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:44.703329    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:44.703335    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:44.703347    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:44.703359    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:44.703367    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:44.703374    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:46.705434    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 17
	I0916 04:08:46.705450    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:46.705461    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:46.706281    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:46.706300    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:46.706313    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:46.706322    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:46.706334    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:46.706349    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:46.706363    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:46.706371    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:46.706379    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:46.706386    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:46.706394    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:46.706401    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:46.706408    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:46.706418    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:46.706425    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:46.706439    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:46.706451    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:48.708496    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 18
	I0916 04:08:48.708512    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:48.708527    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:48.709341    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:48.709349    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:48.709364    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:48.709394    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:48.709411    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:48.709424    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:48.709432    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:48.709437    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:48.709457    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:48.709471    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:48.709480    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:48.709487    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:48.709494    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:48.709500    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:48.709508    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:48.709514    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:48.709522    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:50.711553    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 19
	I0916 04:08:50.711570    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:50.711624    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:50.712388    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:50.712440    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:50.712452    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:50.712460    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:50.712467    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:50.712481    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:50.712494    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:50.712504    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:50.712513    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:50.712528    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:50.712537    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:50.712543    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:50.712554    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:50.712562    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:50.712568    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:50.712574    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:50.712582    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:52.714614    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 20
	I0916 04:08:52.714629    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:52.714675    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:52.715474    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:52.715503    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:52.715513    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:52.715524    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:52.715530    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:52.715536    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:52.715543    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:52.715549    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:52.715556    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:52.715564    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:52.715572    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:52.715579    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:52.715585    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:52.715592    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:52.715598    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:52.715612    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:52.715623    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:54.717729    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 21
	I0916 04:08:54.717742    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:54.717777    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:54.718579    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:54.718632    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:54.718644    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:54.718655    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:54.718663    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:54.718670    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:54.718686    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:54.718698    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:54.718721    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:54.718732    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:54.718739    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:54.718745    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:54.718753    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:54.718766    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:54.718778    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:54.718790    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:54.718800    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:56.720881    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 22
	I0916 04:08:56.720896    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:56.720939    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:56.721775    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:56.721821    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:56.721831    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:56.721838    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:56.721845    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:56.721854    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:56.721871    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:56.721887    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:56.721901    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:56.721915    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:56.721928    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:56.721936    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:56.721942    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:56.721949    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:56.721963    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:56.721974    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:56.721982    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:58.724024    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 23
	I0916 04:08:58.724037    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:58.724077    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:08:58.724880    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:08:58.724930    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:58.724944    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:58.724952    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:58.724961    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:58.724971    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:58.724981    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:58.724993    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:58.725003    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:58.725023    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:58.725031    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:58.725038    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:58.725044    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:58.725057    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:58.725071    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:58.725080    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:58.725087    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:00.725597    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 24
	I0916 04:09:00.725612    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:00.725666    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:00.726750    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:00.726799    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:00.726808    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:00.726816    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:00.726822    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:00.726832    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:00.726837    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:00.726843    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:00.726849    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:00.726856    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:00.726861    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:00.726870    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:00.726892    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:00.726904    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:00.726911    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:00.726917    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:00.726952    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:02.729050    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 25
	I0916 04:09:02.729071    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:02.729111    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:02.729922    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:02.729966    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:02.729980    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:02.729997    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:02.730007    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:02.730021    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:02.730031    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:02.730048    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:02.730062    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:02.730070    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:02.730077    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:02.730084    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:02.730092    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:02.730099    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:02.730108    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:02.730115    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:02.730124    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:04.730581    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 26
	I0916 04:09:04.730594    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:04.730699    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:04.731475    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:04.731503    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:04.731527    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:04.731537    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:04.731545    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:04.731552    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:04.731558    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:04.731566    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:04.731589    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:04.731598    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:04.731605    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:04.731613    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:04.731631    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:04.731641    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:04.731648    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:04.731654    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:04.731665    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:06.733745    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 27
	I0916 04:09:06.733760    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:06.733812    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:06.734677    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:06.734727    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:06.734739    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:06.734758    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:06.734772    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:06.734785    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:06.734794    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:06.734800    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:06.734808    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:06.734817    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:06.734825    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:06.734836    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:06.734849    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:06.734860    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:06.734871    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:06.734878    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:06.734884    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:08.736937    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 28
	I0916 04:09:08.736952    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:08.736998    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:08.737781    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:08.737834    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:08.737842    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:08.737849    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:08.737855    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:08.737864    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:08.737871    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:08.737878    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:08.737884    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:08.737889    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:08.737896    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:08.737901    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:08.737908    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:08.737921    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:08.737940    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:08.737947    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:08.737955    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:10.738221    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 29
	I0916 04:09:10.738236    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:10.738290    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:10.739160    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 86:d8:21:2c:aa:6e in /var/db/dhcpd_leases ...
	I0916 04:09:10.739205    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:10.739216    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:10.739224    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:10.739231    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:10.739246    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:10.739258    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:10.739270    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:10.739275    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:10.739288    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:10.739295    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:10.739301    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:10.739306    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:10.739313    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:10.739321    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:10.739327    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:10.739332    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:12.741479    5706 client.go:171] duration metric: took 1m0.888997299s to LocalClient.Create
	I0916 04:09:14.741806    5706 start.go:128] duration metric: took 1m2.920842808s to createHost
	I0916 04:09:14.741821    5706 start.go:83] releasing machines lock for "docker-flags-991000", held for 1m2.92097193s
	W0916 04:09:14.741859    5706 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:d8:21:2c:aa:6e
	I0916 04:09:14.742239    5706 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:09:14.742265    5706 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:09:14.751065    5706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52722
	I0916 04:09:14.751454    5706 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:09:14.751835    5706 main.go:141] libmachine: Using API Version  1
	I0916 04:09:14.751852    5706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:09:14.752129    5706 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:09:14.752545    5706 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:09:14.752580    5706 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:09:14.761288    5706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52724
	I0916 04:09:14.761764    5706 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:09:14.762273    5706 main.go:141] libmachine: Using API Version  1
	I0916 04:09:14.762292    5706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:09:14.762617    5706 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:09:14.762775    5706 main.go:141] libmachine: (docker-flags-991000) Calling .GetState
	I0916 04:09:14.762880    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.762947    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:14.763934    5706 main.go:141] libmachine: (docker-flags-991000) Calling .DriverName
	I0916 04:09:14.785414    5706 out.go:177] * Deleting "docker-flags-991000" in hyperkit ...
	I0916 04:09:14.843143    5706 main.go:141] libmachine: (docker-flags-991000) Calling .Remove
	I0916 04:09:14.843267    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.843278    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.843349    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:14.844290    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.844361    5706 main.go:141] libmachine: (docker-flags-991000) DBG | waiting for graceful shutdown
	I0916 04:09:15.844506    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:15.844604    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:15.845505    5706 main.go:141] libmachine: (docker-flags-991000) DBG | waiting for graceful shutdown
	I0916 04:09:16.845972    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:16.846137    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:16.847710    5706 main.go:141] libmachine: (docker-flags-991000) DBG | waiting for graceful shutdown
	I0916 04:09:17.848509    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:17.848585    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:17.849147    5706 main.go:141] libmachine: (docker-flags-991000) DBG | waiting for graceful shutdown
	I0916 04:09:18.849537    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:18.849613    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:18.850232    5706 main.go:141] libmachine: (docker-flags-991000) DBG | waiting for graceful shutdown
	I0916 04:09:19.850442    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:19.850539    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5778
	I0916 04:09:19.851651    5706 main.go:141] libmachine: (docker-flags-991000) DBG | sending sigkill
	I0916 04:09:19.851661    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0916 04:09:19.862830    5706 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:d8:21:2c:aa:6e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:d8:21:2c:aa:6e
	I0916 04:09:19.862850    5706 start.go:729] Will try again in 5 seconds ...
	I0916 04:09:19.894647    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:09:19 WARN : hyperkit: failed to read stderr: EOF
	I0916 04:09:19.894668    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:09:19 WARN : hyperkit: failed to read stdout: EOF
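The attempts above all do the same thing: scan `/var/db/dhcpd_leases` for an entry whose `hw_address` matches the VM's generated MAC (`86:d8:21:2c:aa:6e`), and fail because macOS's `bootpd` never hands the VM a lease. A minimal, hypothetical sketch of that lookup (not minikube's actual implementation; the entry format is assumed from the standard macOS `dhcpd_leases` layout, with sample values taken from the log above):

```go
package main

import (
	"fmt"
	"strings"
)

// sample mimics two entries from /var/db/dhcpd_leases. The field order
// (ip_address before hw_address) matches the macOS bootpd lease file.
const sample = `{
	name=minikube
	ip_address=192.169.0.16
	hw_address=1,ca:ce:7f:3b:e5:57
	lease=0x66e9617c
}
{
	name=minikube
	ip_address=192.169.0.15
	hw_address=1,ca:75:7f:e5:69:21
	lease=0x66e960bd
}`

// findIP returns the ip_address of the lease entry whose hw_address line
// contains mac, or "" when no entry matches -- the failure mode in this
// log. Real code would also normalize single-digit octets (e.g. "6" vs "06").
func findIP(leases, mac string) string {
	ip := ""
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "{":
			ip = "" // a new lease entry begins
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.Contains(line, mac) {
				return ip
			}
		}
	}
	return ""
}

func main() {
	fmt.Println(findIP(sample, "ca:ce:7f:3b:e5:57")) // matches the first entry
	fmt.Println(findIP(sample, "86:d8:21:2c:aa:6e")) // the MAC the driver never finds: empty
}
```

When the MAC is absent from every entry, the driver sleeps and retries (the numbered "Attempt N" lines), then gives up with "IP address never found in dhcp leases file", as seen below.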
	I0916 04:09:24.863920    5706 start.go:360] acquireMachinesLock for docker-flags-991000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:10:17.602601    5706 start.go:364] duration metric: took 52.737920914s to acquireMachinesLock for "docker-flags-991000"
	I0916 04:10:17.602626    5706 start.go:93] Provisioning new machine with config: &{Name:docker-flags-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:10:17.602698    5706 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:10:17.665985    5706 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:10:17.666067    5706 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:10:17.666089    5706 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:10:17.674616    5706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52728
	I0916 04:10:17.674962    5706 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:10:17.675312    5706 main.go:141] libmachine: Using API Version  1
	I0916 04:10:17.675322    5706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:10:17.675534    5706 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:10:17.675649    5706 main.go:141] libmachine: (docker-flags-991000) Calling .GetMachineName
	I0916 04:10:17.675731    5706 main.go:141] libmachine: (docker-flags-991000) Calling .DriverName
	I0916 04:10:17.675828    5706 start.go:159] libmachine.API.Create for "docker-flags-991000" (driver="hyperkit")
	I0916 04:10:17.675842    5706 client.go:168] LocalClient.Create starting
	I0916 04:10:17.675873    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:10:17.675926    5706 main.go:141] libmachine: Decoding PEM data...
	I0916 04:10:17.675936    5706 main.go:141] libmachine: Parsing certificate...
	I0916 04:10:17.675978    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:10:17.676015    5706 main.go:141] libmachine: Decoding PEM data...
	I0916 04:10:17.676028    5706 main.go:141] libmachine: Parsing certificate...
	I0916 04:10:17.676040    5706 main.go:141] libmachine: Running pre-create checks...
	I0916 04:10:17.676045    5706 main.go:141] libmachine: (docker-flags-991000) Calling .PreCreateCheck
	I0916 04:10:17.676114    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:17.676157    5706 main.go:141] libmachine: (docker-flags-991000) Calling .GetConfigRaw
	I0916 04:10:17.687037    5706 main.go:141] libmachine: Creating machine...
	I0916 04:10:17.687046    5706 main.go:141] libmachine: (docker-flags-991000) Calling .Create
	I0916 04:10:17.687134    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:17.687285    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:10:17.687129    5912 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:10:17.687325    5706 main.go:141] libmachine: (docker-flags-991000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:10:18.124669    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:10:18.124611    5912 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/id_rsa...
	I0916 04:10:18.271441    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:10:18.271364    5912 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk...
	I0916 04:10:18.271454    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Writing magic tar header
	I0916 04:10:18.271468    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Writing SSH key tar header
	I0916 04:10:18.292372    5706 main.go:141] libmachine: (docker-flags-991000) DBG | I0916 04:10:18.292324    5912 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000 ...
	I0916 04:10:18.666336    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:18.666355    5706 main.go:141] libmachine: (docker-flags-991000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid
	I0916 04:10:18.666366    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Using UUID 494d79ed-c069-4d6a-b648-ffd1e4eb9922
	I0916 04:10:18.691682    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Generated MAC 2e:3c:e4:d4:d5:14
	I0916 04:10:18.691709    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000
	I0916 04:10:18.691745    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"494d79ed-c069-4d6a-b648-ffd1e4eb9922", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001161b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:10:18.691776    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"494d79ed-c069-4d6a-b648-ffd1e4eb9922", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001161b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:10:18.691853    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "494d79ed-c069-4d6a-b648-ffd1e4eb9922", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage,/Users/jenkins/m
inikube-integration/19651-1087/.minikube/machines/docker-flags-991000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000"}
	I0916 04:10:18.691908    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 494d79ed-c069-4d6a-b648-ffd1e4eb9922 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/docker-flags-991000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags
-991000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-991000"
	I0916 04:10:18.691925    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:10:18.694770    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 DEBUG: hyperkit: Pid is 5926
	I0916 04:10:18.695797    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 0
	I0916 04:10:18.695813    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:18.695883    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:18.696832    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:18.696891    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:18.696922    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:18.696944    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:18.696959    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:18.696981    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:18.697007    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:18.697023    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:18.697033    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:18.697051    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:18.697081    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:18.697093    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:18.697102    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:18.697113    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:18.697122    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:18.697128    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:18.697138    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:18.702216    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:10:18.710112    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/docker-flags-991000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:10:18.711095    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:10:18.711114    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:10:18.711123    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:10:18.711132    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:10:19.086302    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:10:19.086317    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:10:19.201046    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:10:19.201069    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:10:19.201086    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:10:19.201100    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:10:19.201902    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:10:19.201913    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:10:20.697394    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 1
	I0916 04:10:20.697411    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:20.697453    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:20.698286    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:20.698317    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:20.698324    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:20.698333    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:20.698341    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:20.698350    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:20.698359    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:20.698366    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:20.698378    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:20.698387    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:20.698392    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:20.698405    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:20.698418    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:20.698443    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:20.698454    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:20.698464    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:20.698471    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:22.698632    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 2
	I0916 04:10:22.698658    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:22.698768    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:22.699591    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:22.699646    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:22.699656    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:22.699664    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:22.699671    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:22.699686    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:22.699694    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:22.699701    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:22.699708    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:22.699715    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:22.699722    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:22.699728    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:22.699735    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:22.699743    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:22.699751    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:22.699758    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:22.699764    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:24.589744    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:10:24.589895    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:10:24.589905    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:10:24.610052    5706 main.go:141] libmachine: (docker-flags-991000) DBG | 2024/09/16 04:10:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:10:24.700793    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 3
	I0916 04:10:24.700813    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:24.701087    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:24.702130    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:24.702206    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:24.702219    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:24.702231    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:24.702240    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:24.702255    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:24.702270    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:24.702279    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:24.702287    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:24.702295    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:24.702324    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:24.702352    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:24.702367    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:24.702378    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:24.702386    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:24.702405    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:24.702423    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:26.703201    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 4
	I0916 04:10:26.703217    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:26.703318    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:26.704135    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:26.704169    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:26.704177    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:26.704201    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:26.704212    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:26.704227    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:26.704234    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:26.704241    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:26.704247    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:26.704252    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:26.704261    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:26.704269    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:26.704293    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:26.704307    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:26.704320    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:26.704327    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:26.704334    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:28.704551    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 5
	I0916 04:10:28.704562    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:28.704630    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:28.705449    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:28.705486    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:28.705501    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:28.705537    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:28.705551    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:28.705558    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:28.705566    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:28.705578    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:28.705588    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:28.705602    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:28.705610    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:28.705616    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:28.705624    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:28.705630    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:28.705636    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:28.705649    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:28.705662    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:30.707697    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 6
	I0916 04:10:30.707710    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:30.707750    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:30.708549    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:30.708603    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:30.708613    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:30.708621    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:30.708628    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:30.708639    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:30.708645    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:30.708651    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:30.708666    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:30.708676    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:30.708683    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:30.708691    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:30.708700    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:30.708707    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:30.708715    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:30.708724    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:30.708732    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:32.710274    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 7
	I0916 04:10:32.710289    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:32.710342    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:32.711104    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:32.711149    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:32.711158    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:32.711169    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:32.711180    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:32.711188    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:32.711195    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:32.711201    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:32.711207    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:32.711214    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:32.711221    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:32.711230    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:32.711238    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:32.711255    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:32.711263    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:32.711280    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:32.711292    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:34.713352    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 8
	I0916 04:10:34.713366    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:34.713431    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:34.714230    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:34.714248    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:34.714271    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:34.714296    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:34.714316    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:34.714325    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:34.714333    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:34.714340    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:34.714359    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:34.714372    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:34.714382    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:34.714393    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:34.714401    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:34.714407    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:34.714414    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:34.714419    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:34.714432    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:36.715003    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 9
	I0916 04:10:36.715018    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:36.715092    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:36.715902    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:36.715924    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:36.715936    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:36.715957    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:36.715971    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:36.715977    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:36.715984    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:36.715989    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:36.715995    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:36.716003    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:36.716022    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:36.716030    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:36.716045    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:36.716056    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:36.716086    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:36.716096    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:36.716105    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:38.718206    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 10
	I0916 04:10:38.718220    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:38.718267    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:38.719057    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:38.719113    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:38.719121    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:38.719131    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:38.719137    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:38.719144    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:38.719158    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:38.719175    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:38.719186    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:38.719200    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:38.719208    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:38.719216    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:38.719222    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:38.719231    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:38.719239    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:38.719246    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:38.719254    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:40.720250    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 11
	I0916 04:10:40.720264    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:40.720331    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:40.721100    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:40.721136    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:40.721156    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:40.721173    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:40.721186    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:40.721196    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:40.721205    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:40.721212    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:40.721223    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:40.721230    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:40.721237    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:40.721247    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:40.721255    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:40.721262    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:40.721268    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:40.721287    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:40.721300    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:42.722596    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 12
	I0916 04:10:42.722608    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:42.722688    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:42.723466    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:42.723517    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:42.723527    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:42.723544    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:42.723555    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:42.723565    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:42.723574    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:42.723582    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:42.723590    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:42.723597    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:42.723618    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:42.723632    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:42.723640    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:42.723651    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:42.723660    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:42.723669    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:42.723677    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:44.725697    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 13
	I0916 04:10:44.725727    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:44.725787    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:44.726555    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:44.726619    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:44.726631    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:44.726647    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:44.726656    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:44.726663    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:44.726669    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:44.726676    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:44.726682    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:44.726688    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:44.726698    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:44.726715    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:44.726735    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:44.726742    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:44.726748    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:44.726754    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:44.726762    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:46.726918    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 14
	I0916 04:10:46.726933    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:46.726987    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:46.727756    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:46.727824    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:46.727840    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:46.727856    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:46.727866    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:46.727873    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:46.727879    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:46.727885    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:46.727892    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:46.727907    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:46.727920    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:46.727936    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:46.727948    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:46.727956    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:46.727961    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:46.727967    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:46.727979    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:48.728918    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 15
	I0916 04:10:48.728932    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:48.728964    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:48.729743    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:48.729798    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:48.729810    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:48.729826    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:48.729836    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:48.729850    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:48.729858    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:48.729865    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:48.729873    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:48.729880    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:48.729886    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:48.729899    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:48.729916    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:48.729926    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:48.729935    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:48.729947    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:48.729955    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:50.731932    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 16
	I0916 04:10:50.731946    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:50.731988    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:50.732778    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:50.732831    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:50.732844    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:50.732869    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:50.732879    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:50.732889    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:50.732895    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:50.732910    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:50.732922    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:50.732937    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:50.732952    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:50.732974    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:50.732985    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:50.732993    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:50.733011    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:50.733018    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:50.733026    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:52.735024    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 17
	I0916 04:10:52.735039    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:52.735143    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:52.735909    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:52.735922    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:52.735931    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:52.735937    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:52.735945    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:52.735951    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:52.735964    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:52.735973    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:52.735988    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:52.736010    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:52.736020    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:52.736031    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:52.736041    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:52.736067    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:52.736078    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:52.736086    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:52.736094    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:54.736745    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 18
	I0916 04:10:54.736763    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:54.736841    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:54.737652    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:54.737675    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:54.737683    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:54.737714    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:54.737724    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:54.737730    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:54.737740    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:54.737753    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:54.737761    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:54.737777    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:54.737789    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:54.737797    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:54.737802    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:54.737819    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:54.737831    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:54.737839    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:54.737848    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:56.739926    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 19
	I0916 04:10:56.739939    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:56.739978    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:56.740749    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:56.740799    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:56.740808    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:56.740817    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:56.740824    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:56.740830    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:56.740835    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:56.740875    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:56.740903    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:56.740911    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:56.740918    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:56.740925    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:56.740933    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:56.740946    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:56.740953    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:56.740960    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:56.740976    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:58.743043    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 20
	I0916 04:10:58.743055    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:58.743113    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:10:58.744169    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:10:58.744206    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:58.744216    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:58.744227    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:58.744271    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:58.744281    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:58.744287    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:58.744319    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:58.744329    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:58.744345    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:58.744357    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:58.744366    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:58.744375    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:58.744382    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:58.744390    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:58.744396    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:58.744404    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:00.746411    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 21
	I0916 04:11:00.746429    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:00.746482    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:00.747272    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:00.747308    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:00.747323    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:00.747344    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:00.747355    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:00.747362    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:00.747369    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:00.747381    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:00.747402    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:00.747410    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:00.747439    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:00.747450    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:00.747457    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:00.747465    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:00.747473    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:00.747487    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:00.747499    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:02.749468    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 22
	I0916 04:11:02.749482    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:02.749530    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:02.750363    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:02.750425    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:02.750437    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:02.750459    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:02.750473    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:02.750485    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:02.750493    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:02.750500    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:02.750507    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:02.750514    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:02.750522    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:02.750530    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:02.750538    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:02.750550    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:02.750560    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:02.750568    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:02.750579    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:04.750643    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 23
	I0916 04:11:04.750658    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:04.750717    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:04.751583    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:04.751611    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:04.751625    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:04.751639    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:04.751651    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:04.751663    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:04.751672    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:04.751689    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:04.751702    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:04.751710    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:04.751718    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:04.751738    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:04.751751    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:04.751769    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:04.751781    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:04.751789    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:04.751797    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:06.753033    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 24
	I0916 04:11:06.753047    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:06.753078    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:06.753863    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:06.753907    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:06.753923    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:06.753933    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:06.753940    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:06.753954    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:06.753966    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:06.753980    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:06.753989    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:06.753996    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:06.754002    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:06.754014    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:06.754026    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:06.754043    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:06.754055    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:06.754063    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:06.754070    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:08.756120    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 25
	I0916 04:11:08.756140    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:08.756198    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:08.757088    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:08.757131    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:08.757142    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:08.757151    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:08.757167    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:08.757175    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:08.757184    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:08.757191    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:08.757199    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:08.757204    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:08.757212    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:08.757219    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:08.757252    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:08.757260    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:08.757268    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:08.757286    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:08.757301    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:10.758080    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 26
	I0916 04:11:10.758094    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:10.758162    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:10.758933    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:10.758971    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:10.758986    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:10.758996    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:10.759008    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:10.759018    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:10.759028    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:10.759037    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:10.759058    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:10.759068    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:10.759075    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:10.759083    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:10.759097    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:10.759110    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:10.759119    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:10.759126    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:10.759134    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:12.759778    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 27
	I0916 04:11:12.759793    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:12.759855    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:12.760664    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:12.760688    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:12.760700    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:12.760729    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:12.760751    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:12.760761    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:12.760768    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:12.760774    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:12.760788    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:12.760801    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:12.760820    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:12.760827    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:12.760835    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:12.760843    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:12.760849    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:12.760857    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:12.760865    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:14.761266    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 28
	I0916 04:11:14.761661    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:14.761743    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:14.762261    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:14.762289    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:14.762310    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:14.762324    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:14.762331    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:14.762376    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:14.762393    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:14.762447    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:14.762552    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:14.762572    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:14.762587    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:14.762605    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:14.762629    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:14.762642    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:14.762652    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:14.762667    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:14.762679    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:16.763677    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Attempt 29
	I0916 04:11:16.763696    5706 main.go:141] libmachine: (docker-flags-991000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:11:16.763708    5706 main.go:141] libmachine: (docker-flags-991000) DBG | hyperkit pid from json: 5926
	I0916 04:11:16.764503    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases ...
	I0916 04:11:16.764543    5706 main.go:141] libmachine: (docker-flags-991000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:11:16.764555    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:11:16.764568    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:11:16.764577    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:11:16.764585    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:11:16.764592    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:11:16.764597    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:11:16.764604    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:11:16.764610    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:11:16.764615    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:11:16.764652    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:11:16.764667    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:11:16.764676    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:11:16.764683    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:11:16.764689    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:11:16.764696    5706 main.go:141] libmachine: (docker-flags-991000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:11:18.765801    5706 client.go:171] duration metric: took 1m1.08909676s to LocalClient.Create
	I0916 04:11:20.767907    5706 start.go:128] duration metric: took 1m3.164316603s to createHost
	I0916 04:11:20.767923    5706 start.go:83] releasing machines lock for "docker-flags-991000", held for 1m3.164427872s
	W0916 04:11:20.767998    5706 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-991000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:3c:e4:d4:d5:14
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-991000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:3c:e4:d4:d5:14
	I0916 04:11:20.852248    5706 out.go:201] 
	W0916 04:11:20.873305    5706 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:3c:e4:d4:d5:14
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:3c:e4:d4:d5:14
	W0916 04:11:20.873320    5706 out.go:270] * 
	* 
	W0916 04:11:20.873946    5706 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:11:20.936254    5706 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-991000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-991000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-991000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (182.164326ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-991000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-991000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-991000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-991000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (167.898756ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-991000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-991000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-991000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-16 04:11:21.383137 -0700 PDT m=+3090.044114715
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-991000 -n docker-flags-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-991000 -n docker-flags-991000: exit status 7 (86.51063ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 04:11:21.467499    5989 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:11:21.467517    5989 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-991000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-991000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-991000: (5.241215488s)
--- FAIL: TestDockerFlags (252.27s)
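The repeated "Searching for 2e:3c:e4:d4:d5:14 in /var/db/dhcpd_leases" attempts above are the hyperkit driver polling the macOS DHCP lease file for the VM's MAC address; the test fails once 30 attempts pass without a match. A minimal sketch of that lookup is below. It is illustrative only, not minikube's actual code: `findIPForMAC` and `sampleLeases` are hypothetical names, and the lease block mirrors the `{Name... IPAddress... HWAddress...}` entries echoed in the log.

```go
// Hypothetical sketch of the lease lookup the hyperkit driver performs:
// scan /var/db/dhcpd_leases for a block whose hw_address contains the
// VM's MAC, and return that block's ip_address.
package main

import (
	"fmt"
	"strings"
)

// sampleLeases mimics the macOS dhcpd_leases format (one `{...}` block per lease).
const sampleLeases = `{
	name=minikube
	ip_address=192.169.0.16
	hw_address=1,ca:ce:7f:3b:e5:57
	lease=0x66e9617c
}`

// findIPForMAC returns the ip_address of the lease block whose hw_address
// contains mac, or "" when no block matches — the condition that, after
// repeated attempts, produces the "IP address never found" error above.
func findIPForMAC(leases, mac string) string {
	for _, block := range strings.Split(leases, "}") {
		if strings.Contains(block, "hw_address=") && strings.Contains(block, mac) {
			for _, line := range strings.Split(block, "\n") {
				line = strings.TrimSpace(line)
				if strings.HasPrefix(line, "ip_address=") {
					return strings.TrimPrefix(line, "ip_address=")
				}
			}
		}
	}
	return ""
}

func main() {
	// A MAC present in the lease file resolves to its IP.
	fmt.Println(findIPForMAC(sampleLeases, "ca:ce:7f:3b:e5:57"))
	// The MAC from the failing run is absent, so the lookup yields "".
	fmt.Println(findIPForMAC(sampleLeases, "2e:3c:e4:d4:d5:14") == "")
}
```

In the failing run the guest never obtained a lease at all (only 15 stale `minikube` entries were present), so every poll returned empty and the driver eventually gave up with GUEST_PROVISION.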

                                                
                                    
TestForceSystemdFlag (252.09s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-218000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-218000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.514688198s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-218000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-218000" primary control-plane node in "force-systemd-flag-218000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-218000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:06:11.366191    5638 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:06:11.366361    5638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:06:11.366366    5638 out.go:358] Setting ErrFile to fd 2...
	I0916 04:06:11.366370    5638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:06:11.366543    5638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 04:06:11.367993    5638 out.go:352] Setting JSON to false
	I0916 04:06:11.390566    5638 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3941,"bootTime":1726480830,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 04:06:11.390711    5638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:06:11.412511    5638 out.go:177] * [force-systemd-flag-218000] minikube v1.34.0 on Darwin 14.6.1
	I0916 04:06:11.454931    5638 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:06:11.454975    5638 notify.go:220] Checking for updates...
	I0916 04:06:11.496895    5638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 04:06:11.517819    5638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 04:06:11.538861    5638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:06:11.559815    5638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:06:11.580798    5638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:06:11.602392    5638 config.go:182] Loaded profile config "force-systemd-env-471000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:06:11.602494    5638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:06:11.630863    5638 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 04:06:11.672919    5638 start.go:297] selected driver: hyperkit
	I0916 04:06:11.672934    5638 start.go:901] validating driver "hyperkit" against <nil>
	I0916 04:06:11.672947    5638 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:06:11.675834    5638 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:06:11.675957    5638 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 04:06:11.684534    5638 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 04:06:11.688693    5638 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:06:11.688714    5638 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 04:06:11.688746    5638 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:06:11.688991    5638 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 04:06:11.689017    5638 cni.go:84] Creating CNI manager for ""
	I0916 04:06:11.689063    5638 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:06:11.689072    5638 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:06:11.689140    5638 start.go:340] cluster config:
	{Name:force-systemd-flag-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:06:11.689235    5638 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:06:11.730761    5638 out.go:177] * Starting "force-systemd-flag-218000" primary control-plane node in "force-systemd-flag-218000" cluster
	I0916 04:06:11.751795    5638 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:06:11.751829    5638 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 04:06:11.751842    5638 cache.go:56] Caching tarball of preloaded images
	I0916 04:06:11.751969    5638 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 04:06:11.751979    5638 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:06:11.752053    5638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/force-systemd-flag-218000/config.json ...
	I0916 04:06:11.752079    5638 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/force-systemd-flag-218000/config.json: {Name:mk90cf0dc10babcdc64cac82ade48972ad630e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:06:11.752392    5638 start.go:360] acquireMachinesLock for force-systemd-flag-218000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:07:08.686691    5638 start.go:364] duration metric: took 56.933487768s to acquireMachinesLock for "force-systemd-flag-218000"
	I0916 04:07:08.686732    5638 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:07:08.686803    5638 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:07:08.708494    5638 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:07:08.708705    5638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:07:08.708753    5638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:07:08.717767    5638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52700
	I0916 04:07:08.718221    5638 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:07:08.718776    5638 main.go:141] libmachine: Using API Version  1
	I0916 04:07:08.718787    5638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:07:08.719227    5638 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:07:08.719411    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .GetMachineName
	I0916 04:07:08.719517    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .DriverName
	I0916 04:07:08.719630    5638 start.go:159] libmachine.API.Create for "force-systemd-flag-218000" (driver="hyperkit")
	I0916 04:07:08.719652    5638 client.go:168] LocalClient.Create starting
	I0916 04:07:08.719683    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:07:08.719733    5638 main.go:141] libmachine: Decoding PEM data...
	I0916 04:07:08.719748    5638 main.go:141] libmachine: Parsing certificate...
	I0916 04:07:08.719804    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:07:08.719844    5638 main.go:141] libmachine: Decoding PEM data...
	I0916 04:07:08.719856    5638 main.go:141] libmachine: Parsing certificate...
	I0916 04:07:08.719868    5638 main.go:141] libmachine: Running pre-create checks...
	I0916 04:07:08.719881    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .PreCreateCheck
	I0916 04:07:08.719963    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:08.720117    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .GetConfigRaw
	I0916 04:07:08.750268    5638 main.go:141] libmachine: Creating machine...
	I0916 04:07:08.750275    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .Create
	I0916 04:07:08.750383    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:08.750506    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:07:08.750375    5687 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:07:08.750664    5638 main.go:141] libmachine: (force-systemd-flag-218000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:07:09.180934    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:07:09.180876    5687 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/id_rsa...
	I0916 04:07:09.323286    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:07:09.323218    5687 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk...
	I0916 04:07:09.323300    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Writing magic tar header
	I0916 04:07:09.323315    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Writing SSH key tar header
	I0916 04:07:09.343545    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:07:09.343515    5687 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000 ...
	I0916 04:07:09.717728    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:09.717744    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid
	I0916 04:07:09.717762    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Using UUID f2ba085c-0fa9-40d3-b5c3-67365123560e
	I0916 04:07:09.743417    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Generated MAC ae:69:84:40:1:e8
	I0916 04:07:09.743444    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000
	I0916 04:07:09.743494    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2ba085c-0fa9-40d3-b5c3-67365123560e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:07:09.743541    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f2ba085c-0fa9-40d3-b5c3-67365123560e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:07:09.743608    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f2ba085c-0fa9-40d3-b5c3-67365123560e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000"}
	I0916 04:07:09.743667    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f2ba085c-0fa9-40d3-b5c3-67365123560e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000"
	I0916 04:07:09.743701    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:07:09.746513    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 DEBUG: hyperkit: Pid is 5701
	I0916 04:07:09.747557    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 0
	I0916 04:07:09.747594    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:09.747705    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:09.748707    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:09.748751    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:09.748777    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:09.748791    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:09.748807    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:09.748821    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:09.748842    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:09.748879    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:09.748895    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:09.748912    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:09.748925    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:09.748950    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:09.748980    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:09.748999    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:09.749014    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:09.749027    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:09.749041    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:09.754125    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:07:09.761966    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:07:09.762736    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:07:09.762763    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:07:09.762781    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:07:09.762793    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:07:10.136244    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:07:10.136259    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:07:10.250885    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:07:10.250908    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:07:10.250965    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:07:10.250980    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:07:10.251771    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:07:10.251784    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:07:11.749388    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 1
	I0916 04:07:11.749403    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:11.749434    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:11.750331    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:11.750369    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:11.750378    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:11.750396    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:11.750414    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:11.750430    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:11.750445    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:11.750453    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:11.750461    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:11.750467    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:11.750480    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:11.750488    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:11.750495    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:11.750504    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:11.750512    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:11.750524    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:11.750534    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:13.751763    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 2
	I0916 04:07:13.751782    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:13.751897    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:13.752689    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:13.752741    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:13.752749    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:13.752762    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:13.752779    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:13.752787    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:13.752794    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:13.752800    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:13.752806    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:13.752813    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:13.752818    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:13.752826    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:13.752833    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:13.752838    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:13.752881    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:13.752894    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:13.752908    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:15.654222    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:07:15.654339    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:07:15.654348    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:07:15.674663    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:07:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:07:15.755063    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 3
	I0916 04:07:15.755090    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:15.755251    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:15.756722    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:15.756830    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:15.756850    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:15.756873    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:15.756887    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:15.756922    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:15.756952    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:15.756970    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:15.756991    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:15.757031    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:15.757062    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:15.757087    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:15.757123    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:15.757147    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:15.757163    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:15.757180    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:15.757196    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:17.758181    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 4
	I0916 04:07:17.758198    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:17.758288    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:17.759086    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:17.759131    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:17.759142    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:17.759152    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:17.759160    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:17.759167    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:17.759176    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:17.759183    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:17.759195    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:17.759204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:17.759213    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:17.759219    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:17.759228    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:17.759235    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:17.759243    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:17.759257    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:17.759279    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:19.759694    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 5
	I0916 04:07:19.759709    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:19.759779    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:19.760567    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:19.760615    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:19.760624    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:19.760637    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:19.760651    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:19.760660    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:19.760669    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:19.760676    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:19.760682    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:19.760706    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:19.760718    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:19.760727    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:19.760735    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:19.760742    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:19.760750    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:19.760761    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:19.760769    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:21.761215    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 6
	I0916 04:07:21.761231    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:21.761282    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:21.762065    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:21.762106    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:21.762114    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:21.762126    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:21.762133    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:21.762139    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:21.762147    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:21.762153    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:21.762159    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:21.762166    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:21.762172    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:21.762179    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:21.762187    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:21.762204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:21.762213    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:21.762219    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:21.762233    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:23.763091    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 7
	I0916 04:07:23.763103    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:23.763168    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:23.763962    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:23.764005    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:23.764022    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:23.764042    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:23.764054    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:23.764070    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:23.764079    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:23.764086    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:23.764094    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:23.764110    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:23.764128    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:23.764138    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:23.764160    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:23.764173    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:23.764182    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:23.764189    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:23.764196    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:25.764515    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 8
	I0916 04:07:25.764530    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:25.764605    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:25.765367    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:25.765408    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:25.765419    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:25.765434    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:25.765445    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:25.765454    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:25.765461    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:25.765466    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:25.765489    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:25.765503    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:25.765517    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:25.765529    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:25.765536    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:25.765544    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:25.765550    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:25.765565    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:25.765575    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:27.765889    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 9
	I0916 04:07:27.765905    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:27.765990    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:27.766741    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:27.766795    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:27.766808    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:27.766817    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:27.766823    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:27.766849    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:27.766859    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:27.766867    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:27.766898    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:27.766909    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:27.766915    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:27.766921    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:27.766927    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:27.766933    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:27.766941    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:27.766949    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:27.766959    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:29.769027    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 10
	I0916 04:07:29.769039    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:29.769080    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:29.769885    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:29.769902    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:29.769914    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:29.769925    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:29.769931    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:29.769948    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:29.769954    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:29.769960    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:29.769967    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:29.769973    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:29.769982    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:29.769993    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:29.770001    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:29.770008    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:29.770015    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:29.770027    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:29.770037    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:31.772090    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 11
	I0916 04:07:31.772105    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:31.772164    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:31.772978    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:31.773020    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:31.773029    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:31.773037    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:31.773044    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:31.773060    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:31.773071    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:31.773087    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:31.773099    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:31.773111    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:31.773128    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:31.773143    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:31.773157    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:31.773165    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:31.773175    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:31.773191    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:31.773203    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:33.775137    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 12
	I0916 04:07:33.775150    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:33.775204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:33.776011    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:33.776054    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:33.776062    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:33.776074    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:33.776081    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:33.776088    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:33.776093    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:33.776117    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:33.776137    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:33.776145    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:33.776152    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:33.776160    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:33.776169    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:33.776176    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:33.776183    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:33.776191    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:33.776199    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:35.778274    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 13
	I0916 04:07:35.778286    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:35.778314    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:35.779121    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:35.779178    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:35.779189    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:35.779196    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:35.779202    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:35.779222    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:35.779231    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:35.779237    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:35.779243    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:35.779256    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:35.779268    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:35.779276    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:35.779282    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:35.779289    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:35.779296    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:35.779303    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:35.779308    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:37.781379    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 14
	I0916 04:07:37.781396    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:37.781459    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:37.782231    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:37.782274    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:37.782285    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:37.782295    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:37.782302    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:37.782308    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:37.782315    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:37.782321    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:37.782329    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:37.782364    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:37.782376    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:37.782384    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:37.782392    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:37.782401    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:37.782408    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:37.782424    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:37.782438    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:39.783578    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 15
	I0916 04:07:39.783595    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:39.783647    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:39.784433    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:39.784488    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:39.784500    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:39.784519    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:39.784532    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:39.784548    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:39.784566    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:39.784581    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:39.784592    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:39.784600    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:39.784608    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:39.784614    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:39.784622    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:39.784636    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:39.784645    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:39.784652    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:39.784660    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:41.786678    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 16
	I0916 04:07:41.786693    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:41.786772    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:41.787594    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:41.787605    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:41.787613    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:41.787620    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:41.787628    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:41.787640    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:41.787652    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:41.787662    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:41.787669    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:41.787676    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:41.787694    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:41.787706    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:41.787714    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:41.787721    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:41.787728    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:41.787737    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:41.787752    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:43.789437    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 17
	I0916 04:07:43.789449    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:43.789512    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:43.790357    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:43.790391    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:43.790406    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:43.790421    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:43.790433    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:43.790461    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:43.790474    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:43.790482    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:43.790488    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:43.790494    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:43.790500    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:43.790509    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:43.790524    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:43.790537    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:43.790545    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:43.790553    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:43.790564    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:45.790700    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 18
	I0916 04:07:45.790714    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:45.790756    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:45.791579    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:45.791607    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:45.791620    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:45.791640    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:45.791654    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:45.791662    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:45.791667    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:45.791673    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:45.791679    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:45.791690    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:45.791699    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:45.791708    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:45.791714    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:45.791733    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:45.791746    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:45.791755    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:45.791761    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:47.791862    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 19
	I0916 04:07:47.791877    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:47.791944    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:47.792748    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:47.792791    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:47.792805    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:47.792822    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:47.792847    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:47.792874    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:47.792883    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:47.792891    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:47.792896    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:47.792903    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:47.792909    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:47.792917    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:47.792923    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:47.792931    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:47.792940    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:47.792946    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:47.792955    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:49.793036    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 20
	I0916 04:07:49.793049    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:49.793120    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:49.793904    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:49.793933    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:49.793942    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:49.793966    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:49.793973    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:49.793983    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:49.793989    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:49.793996    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:49.794001    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:49.794010    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:49.794016    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:49.794025    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:49.794032    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:49.794038    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:49.794045    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:49.794052    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:49.794061    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:51.796171    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 21
	I0916 04:07:51.796188    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:51.796221    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:51.796988    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:51.797028    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:51.797042    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:51.797052    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:51.797058    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:51.797075    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:51.797083    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:51.797090    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:51.797096    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:51.797102    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:51.797112    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:51.797120    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:51.797126    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:51.797132    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:51.797140    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:51.797156    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:51.797175    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:53.797365    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 22
	I0916 04:07:53.797377    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:53.797450    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:53.798241    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:53.798295    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:53.798303    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:53.798311    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:53.798318    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:53.798324    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:53.798331    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:53.798337    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:53.798351    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:53.798358    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:53.798365    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:53.798384    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:53.798396    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:53.798405    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:53.798414    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:53.798424    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:53.798433    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:55.799521    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 23
	I0916 04:07:55.799534    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:55.799608    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:55.800446    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:55.800496    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:55.800505    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:55.800514    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:55.800521    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:55.800527    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:55.800532    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:55.800546    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:55.800561    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:55.800571    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:55.800580    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:55.800592    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:55.800600    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:55.800619    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:55.800627    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:55.800646    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:55.800655    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:57.801050    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 24
	I0916 04:07:57.801063    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:57.801120    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:57.801931    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:57.801938    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:57.801947    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:57.801953    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:57.801959    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:57.801964    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:57.801970    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:57.801979    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:57.801987    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:57.801994    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:57.802011    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:57.802020    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:57.802032    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:57.802040    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:57.802049    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:57.802057    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:57.802065    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:59.802411    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 25
	I0916 04:07:59.802423    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:59.802481    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:07:59.803272    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:07:59.803330    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:59.803343    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:59.803353    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:59.803359    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:59.803366    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:59.803374    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:59.803381    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:59.803387    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:59.803394    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:59.803410    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:59.803420    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:59.803432    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:59.803440    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:59.803445    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:59.803451    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:59.803460    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:01.805514    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 26
	I0916 04:08:01.805528    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:01.805609    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:01.806410    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:08:01.806424    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:01.806437    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:01.806446    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:01.806468    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:01.806476    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:01.806482    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:01.806498    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:01.806520    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:01.806529    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:01.806536    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:01.806545    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:01.806552    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:01.806559    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:01.806576    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:01.806589    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:01.806602    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:03.808616    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 27
	I0916 04:08:03.808629    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:03.808673    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:03.809440    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:08:03.809498    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:03.809508    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:03.809516    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:03.809526    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:03.809537    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:03.809544    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:03.809551    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:03.809560    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:03.809573    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:03.809581    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:03.809589    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:03.809595    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:03.809602    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:03.809609    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:03.809632    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:03.809643    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:05.811723    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 28
	I0916 04:08:05.811753    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:05.811772    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:05.812566    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:08:05.812609    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:05.812621    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:05.812639    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:05.812650    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:05.812658    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:05.812666    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:05.812672    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:05.812679    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:05.812694    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:05.812706    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:05.812721    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:05.812729    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:05.812736    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:05.812743    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:05.812753    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:05.812763    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:07.814639    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 29
	I0916 04:08:07.814652    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:07.814709    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:07.815512    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for ae:69:84:40:1:e8 in /var/db/dhcpd_leases ...
	I0916 04:08:07.815533    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:08:07.815548    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:08:07.815556    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:08:07.815563    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:08:07.815570    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:08:07.815594    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:08:07.815606    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:08:07.815614    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:08:07.815622    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:08:07.815637    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:08:07.815652    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:08:07.815664    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:08:07.815672    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:08:07.815681    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:08:07.815692    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:08:07.815702    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:08:09.817775    5638 client.go:171] duration metric: took 1m1.097255537s to LocalClient.Create
	I0916 04:08:11.819864    5638 start.go:128] duration metric: took 1m3.132163661s to createHost
	I0916 04:08:11.819882    5638 start.go:83] releasing machines lock for "force-systemd-flag-218000", held for 1m3.132296555s
	W0916 04:08:11.819898    5638 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:69:84:40:1:e8
	I0916 04:08:11.820240    5638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:08:11.820260    5638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:08:11.828886    5638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52716
	I0916 04:08:11.829233    5638 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:08:11.829589    5638 main.go:141] libmachine: Using API Version  1
	I0916 04:08:11.829599    5638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:08:11.829798    5638 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:08:11.830247    5638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:08:11.830268    5638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:08:11.838623    5638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52718
	I0916 04:08:11.838963    5638 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:08:11.839288    5638 main.go:141] libmachine: Using API Version  1
	I0916 04:08:11.839297    5638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:08:11.839507    5638 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:08:11.839617    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .GetState
	I0916 04:08:11.839701    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.839773    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:11.840795    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .DriverName
	I0916 04:08:11.883191    5638 out.go:177] * Deleting "force-systemd-flag-218000" in hyperkit ...
	I0916 04:08:11.925295    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .Remove
	I0916 04:08:11.925480    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.925496    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.925507    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:11.926470    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:11.926523    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | waiting for graceful shutdown
	I0916 04:08:12.928504    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:12.928627    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:12.929551    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | waiting for graceful shutdown
	I0916 04:08:13.929771    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:13.929860    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:13.931534    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | waiting for graceful shutdown
	I0916 04:08:14.931762    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:14.931827    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:14.932549    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | waiting for graceful shutdown
	I0916 04:08:15.934710    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:15.934786    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:15.935335    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | waiting for graceful shutdown
	I0916 04:08:16.935776    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:08:16.935869    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5701
	I0916 04:08:16.936784    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | sending sigkill
	I0916 04:08:16.936794    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0916 04:08:16.946418    5638 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:69:84:40:1:e8
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:69:84:40:1:e8
	I0916 04:08:16.946435    5638 start.go:729] Will try again in 5 seconds ...
	I0916 04:08:16.977620    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:08:16 WARN : hyperkit: failed to read stderr: EOF
	I0916 04:08:16.977644    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:08:16 WARN : hyperkit: failed to read stdout: EOF
	I0916 04:08:21.947492    5638 start.go:360] acquireMachinesLock for force-systemd-flag-218000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:09:14.741912    5638 start.go:364] duration metric: took 52.793659717s to acquireMachinesLock for "force-systemd-flag-218000"
	I0916 04:09:14.741934    5638 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:09:14.742002    5638 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:09:14.763433    5638 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:09:14.763518    5638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:09:14.763543    5638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:09:14.772104    5638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52726
	I0916 04:09:14.772578    5638 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:09:14.772980    5638 main.go:141] libmachine: Using API Version  1
	I0916 04:09:14.773016    5638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:09:14.773371    5638 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:09:14.773528    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .GetMachineName
	I0916 04:09:14.773603    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .DriverName
	I0916 04:09:14.773713    5638 start.go:159] libmachine.API.Create for "force-systemd-flag-218000" (driver="hyperkit")
	I0916 04:09:14.773732    5638 client.go:168] LocalClient.Create starting
	I0916 04:09:14.773756    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:09:14.773806    5638 main.go:141] libmachine: Decoding PEM data...
	I0916 04:09:14.773819    5638 main.go:141] libmachine: Parsing certificate...
	I0916 04:09:14.773862    5638 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:09:14.773902    5638 main.go:141] libmachine: Decoding PEM data...
	I0916 04:09:14.773915    5638 main.go:141] libmachine: Parsing certificate...
	I0916 04:09:14.773930    5638 main.go:141] libmachine: Running pre-create checks...
	I0916 04:09:14.773936    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .PreCreateCheck
	I0916 04:09:14.774042    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.774053    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .GetConfigRaw
	I0916 04:09:14.843143    5638 main.go:141] libmachine: Creating machine...
	I0916 04:09:14.843156    5638 main.go:141] libmachine: (force-systemd-flag-218000) Calling .Create
	I0916 04:09:14.843246    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:14.843392    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:09:14.843247    5840 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:09:14.843459    5638 main.go:141] libmachine: (force-systemd-flag-218000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:09:15.028666    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:09:15.028563    5840 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/id_rsa...
	I0916 04:09:15.128285    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:09:15.128207    5840 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk...
	I0916 04:09:15.128294    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Writing magic tar header
	I0916 04:09:15.128304    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Writing SSH key tar header
	I0916 04:09:15.128874    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | I0916 04:09:15.128827    5840 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000 ...
	I0916 04:09:15.504446    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:15.504464    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid
	I0916 04:09:15.504502    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Using UUID 2f8ab3f6-b0a3-49fe-90b8-e67e8b439b4a
	I0916 04:09:15.529418    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Generated MAC 9a:be:e0:20:db:f3
	I0916 04:09:15.529436    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000
	I0916 04:09:15.529482    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f8ab3f6-b0a3-49fe-90b8-e67e8b439b4a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:09:15.529512    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f8ab3f6-b0a3-49fe-90b8-e67e8b439b4a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:09:15.529565    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f8ab3f6-b0a3-49fe-90b8-e67e8b439b4a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/fo
rce-systemd-flag-218000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000"}
	I0916 04:09:15.529613    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f8ab3f6-b0a3-49fe-90b8-e67e8b439b4a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/force-systemd-flag-218000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/bzimage,/Users/jenkins/minikube-integr
ation/19651-1087/.minikube/machines/force-systemd-flag-218000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-218000"
	I0916 04:09:15.529625    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:09:15.532518    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 DEBUG: hyperkit: Pid is 5841
	I0916 04:09:15.532964    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 0
	I0916 04:09:15.532995    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:15.533077    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:15.534042    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:15.534137    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:15.534161    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:15.534204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:15.534223    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:15.534242    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:15.534257    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:15.534272    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:15.534293    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:15.534313    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:15.534332    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:15.534356    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:15.534378    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:15.534404    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:15.534412    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:15.534431    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:15.534443    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:15.540067    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:09:15.548031    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-flag-218000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:09:15.548845    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:09:15.548860    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:09:15.548867    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:09:15.548873    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:09:15.922565    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:09:15.922578    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:09:16.037274    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:09:16.037293    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:09:16.037314    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:09:16.037323    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:09:16.038158    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:09:16.038167    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:09:17.534822    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 1
	I0916 04:09:17.534835    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:17.534889    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:17.535718    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:17.535742    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:17.535749    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:17.535763    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:17.535772    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:17.535796    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:17.535810    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:17.535822    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:17.535833    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:17.535843    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:17.535849    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:17.535863    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:17.535876    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:17.535885    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:17.535892    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:17.535899    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:17.535907    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:19.536576    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 2
	I0916 04:09:19.536590    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:19.536691    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:19.537509    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:19.537552    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:19.537560    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:19.537571    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:19.537577    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:19.537583    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:19.537592    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:19.537598    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:19.537603    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:19.537615    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:19.537623    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:19.537642    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:19.537654    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:19.537661    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:19.537670    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:19.537678    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:19.537702    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:21.419756    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:09:21.419911    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:09:21.419925    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:09:21.440035    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | 2024/09/16 04:09:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:09:21.538350    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 3
	I0916 04:09:21.538371    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:21.538532    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:21.539455    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:21.539516    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:21.539529    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:21.539540    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:21.539557    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:21.539571    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:21.539580    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:21.539590    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:21.539607    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:21.539619    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:21.539627    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:21.539635    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:21.539642    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:21.539650    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:21.539658    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:21.539666    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:21.539674    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:23.540195    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 4
	I0916 04:09:23.540211    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:23.540309    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:23.541094    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:23.541146    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:23.541157    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:23.541165    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:23.541176    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:23.541195    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:23.541207    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:23.541224    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:23.541237    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:23.541245    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:23.541253    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:23.541260    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:23.541267    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:23.541286    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:23.541302    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:23.541311    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:23.541320    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:25.543472    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 5
	I0916 04:09:25.543487    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:25.543531    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:25.544322    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:25.544356    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:25.544384    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:25.544398    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:25.544409    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:25.544415    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:25.544421    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:25.544431    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:25.544449    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:25.544460    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:25.544470    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:25.544479    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:25.544496    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:25.544508    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:25.544515    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:25.544523    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:25.544538    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:27.544684    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 6
	I0916 04:09:27.544700    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:27.544772    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:27.545604    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:27.545664    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:27.545674    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:27.545702    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:27.545715    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:27.545724    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:27.545731    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:27.545739    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:27.545753    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:27.545763    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:27.545770    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:27.545778    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:27.545785    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:27.545793    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:27.545799    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:27.545805    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:27.545813    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:29.547876    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 7
	I0916 04:09:29.547888    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:29.547919    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:29.548721    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:29.548757    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:29.548769    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:29.548779    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:29.548785    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:29.548791    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:29.548801    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:29.548807    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:29.548819    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:29.548826    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:29.548833    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:29.548840    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:29.548846    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:29.548857    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:29.548868    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:29.548879    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:29.548887    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:31.551058    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 8
	I0916 04:09:31.551077    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:31.551137    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:31.551965    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:31.552024    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:31.552034    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:31.552050    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:31.552061    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:31.552071    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:31.552076    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:31.552093    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:31.552104    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:31.552126    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:31.552137    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:31.552144    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:31.552155    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:31.552163    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:31.552169    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:31.552176    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:31.552184    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:33.552230    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 9
	I0916 04:09:33.552244    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:33.552296    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:33.553099    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:33.553163    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:33.553176    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:33.553191    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:33.553204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:33.553214    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:33.553223    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:33.553230    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:33.553244    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:33.553252    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:33.553260    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:33.553266    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:33.553277    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:33.553285    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:33.553291    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:33.553304    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:33.553323    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:35.553524    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 10
	I0916 04:09:35.553551    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:35.553602    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:35.554569    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:35.554626    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:35.554637    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:35.554668    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:35.554677    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:35.554683    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:35.554696    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:35.554703    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:35.554711    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:35.554719    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:35.554730    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:35.554742    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:35.554750    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:35.554765    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:35.554774    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:35.554790    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:35.554803    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:37.555080    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 11
	I0916 04:09:37.555095    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:37.555150    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:37.555984    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:37.555992    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:37.556000    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:37.556006    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:37.556017    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:37.556026    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:37.556033    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:37.556048    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:37.556072    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:37.556085    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:37.556102    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:37.556115    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:37.556123    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:37.556131    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:37.556138    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:37.556151    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:37.556160    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:39.558160    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 12
	I0916 04:09:39.558172    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:39.558224    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:39.559024    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:39.559090    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:39.559101    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:39.559117    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:39.559127    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:39.559141    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:39.559150    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:39.559165    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:39.559177    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:39.559185    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:39.559194    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:39.559201    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:39.559209    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:39.559216    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:39.559227    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:39.559235    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:39.559244    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:41.561271    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 13
	I0916 04:09:41.561311    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:41.561344    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:41.562189    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:41.562204    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:41.562217    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:41.562231    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:41.562239    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:41.562245    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:41.562253    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:41.562265    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:41.562275    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:41.562283    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:41.562291    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:41.562298    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:41.562306    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:41.562312    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:41.562321    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:41.562329    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:41.562337    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:43.563266    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 14
	I0916 04:09:43.563281    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:43.563347    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:43.564132    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:43.564177    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:43.564189    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:43.564198    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:43.564208    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:43.564234    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:43.564248    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:43.564258    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:43.564267    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:43.564273    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:43.564280    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:43.564286    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:43.564297    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:43.564305    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:43.564317    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:43.564326    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:43.564350    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:45.564502    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 15
	I0916 04:09:45.564517    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:45.564599    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:45.565373    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:45.565429    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:45.565442    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:45.565456    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:45.565466    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:45.565477    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:45.565487    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:45.565495    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:45.565503    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:45.565514    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:45.565523    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:45.565529    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:45.565536    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:45.565551    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:45.565558    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:45.565580    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:45.565598    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:47.566542    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 16
	I0916 04:09:47.566557    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:47.566627    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:47.567437    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:47.567491    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:47.567502    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:47.567510    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:47.567516    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:47.567527    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:47.567535    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:47.567552    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:47.567566    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:47.567574    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:47.567579    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:47.567586    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:47.567592    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:47.567609    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:47.567624    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:47.567638    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:47.567647    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:49.569679    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 17
	I0916 04:09:49.569695    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:49.569744    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:49.570657    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:49.570706    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:49.570719    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:49.570730    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:49.570739    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:49.570746    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:49.570752    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:49.570758    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:49.570765    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:49.570779    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:49.570787    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:49.570792    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:49.570799    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:49.570805    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:49.570812    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:49.570820    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:49.570829    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:51.571550    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 18
	I0916 04:09:51.571563    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:51.571636    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:51.572417    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:51.572475    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:51.572489    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:51.572501    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:51.572513    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:51.572524    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:51.572535    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:51.572543    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:51.572557    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:51.572567    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:51.572578    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:51.572587    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:51.572595    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:51.572603    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:51.572608    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:51.572626    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:51.572638    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:53.573234    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 19
	I0916 04:09:53.573250    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:53.573297    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:53.574178    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:53.574221    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:53.574248    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:53.574278    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:53.574284    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:53.574290    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:53.574296    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:53.574308    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:53.574317    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:53.574322    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:53.574329    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:53.574335    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:53.574342    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:53.574350    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:53.574356    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:53.574362    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:53.574370    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:55.576486    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 20
	I0916 04:09:55.576498    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:55.576548    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:55.577420    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:55.577467    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:55.577481    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:55.577492    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:55.577503    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:55.577519    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:55.577526    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:55.577532    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:55.577538    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:55.577548    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:55.577555    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:55.577563    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:55.577576    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:55.577590    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:55.577600    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:55.577605    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:55.577616    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:57.579694    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 21
	I0916 04:09:57.579711    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:57.579758    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:57.580660    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:57.580668    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:57.580677    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:57.580689    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:57.580706    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:57.580716    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:57.580724    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:57.580739    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:57.580749    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:57.580759    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:57.580768    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:57.580776    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:57.580801    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:57.580814    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:57.580827    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:57.580835    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:57.580844    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:09:59.581643    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 22
	I0916 04:09:59.581658    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:09:59.581714    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:09:59.582581    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:09:59.582619    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:09:59.582628    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:09:59.582636    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:09:59.582650    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:09:59.582663    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:09:59.582672    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:09:59.582689    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:09:59.582702    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:09:59.582712    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:09:59.582718    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:09:59.582737    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:09:59.582751    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:09:59.582759    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:09:59.582766    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:09:59.582773    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:09:59.582781    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:01.583266    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 23
	I0916 04:10:01.583279    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:01.583320    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:01.584136    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:01.584149    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:01.584155    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:01.584161    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:01.584166    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:01.584185    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:01.584199    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:01.584217    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:01.584226    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:01.584233    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:01.584241    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:01.584247    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:01.584253    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:01.584259    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:01.584265    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:01.584273    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:01.584280    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:03.585519    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 24
	I0916 04:10:03.585535    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:03.585545    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:03.586373    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:03.586421    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:03.586429    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:03.586448    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:03.586460    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:03.586468    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:03.586477    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:03.586494    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:03.586505    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:03.586518    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:03.586529    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:03.586536    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:03.586544    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:03.586551    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:03.586559    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:03.586566    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:03.586592    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:05.588632    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 25
	I0916 04:10:05.588648    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:05.588707    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:05.589559    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:05.589608    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:05.589617    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:05.589625    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:05.589630    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:05.589636    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:05.589641    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:05.589647    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:05.589653    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:05.589660    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:05.589667    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:05.589687    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:05.589699    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:05.589707    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:05.589715    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:05.589730    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:05.589742    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:07.591661    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 26
	I0916 04:10:07.591674    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:07.591750    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:07.592752    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:07.592808    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:07.592821    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:07.592848    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:07.592870    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:07.592884    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:07.592893    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:07.592914    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:07.592937    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:07.592949    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:07.592957    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:07.592963    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:07.592982    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:07.592994    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:07.593002    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:07.593009    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:07.593018    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:09.593491    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 27
	I0916 04:10:09.593505    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:09.593571    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:09.594362    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:09.594391    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:09.594409    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:09.594415    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:09.594422    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:09.594431    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:09.594438    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:09.594447    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:09.594462    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:09.594476    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:09.594487    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:09.594495    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:09.594503    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:09.594508    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:09.594514    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:09.594519    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:09.594526    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:11.595971    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 28
	I0916 04:10:11.596376    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:11.596555    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:11.596911    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:11.596980    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:11.596990    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:11.596997    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:11.597007    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:11.597022    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:11.597031    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:11.597042    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:11.597054    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:11.597067    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:11.597076    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:11.597083    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:11.597093    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:11.597114    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:11.597142    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:11.597167    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:11.597185    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:13.597703    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Attempt 29
	I0916 04:10:13.597723    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:10:13.597770    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | hyperkit pid from json: 5841
	I0916 04:10:13.598584    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Searching for 9a:be:e0:20:db:f3 in /var/db/dhcpd_leases ...
	I0916 04:10:13.598634    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:10:13.598645    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:10:13.598655    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:10:13.598662    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:10:13.598670    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:10:13.598676    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:10:13.598693    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:10:13.598703    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:10:13.598715    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:10:13.598724    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:10:13.598765    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:10:13.598784    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:10:13.598794    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:10:13.598836    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:10:13.598863    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:10:13.598877    5638 main.go:141] libmachine: (force-systemd-flag-218000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:10:15.600820    5638 client.go:171] duration metric: took 1m0.826229539s to LocalClient.Create
	I0916 04:10:17.602482    5638 start.go:128] duration metric: took 1m2.859592421s to createHost
	I0916 04:10:17.602497    5638 start.go:83] releasing machines lock for "force-systemd-flag-218000", held for 1m2.859694678s
	W0916 04:10:17.602627    5638 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-218000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:be:e0:20:db:f3
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-218000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:be:e0:20:db:f3
	I0916 04:10:17.686896    5638 out.go:201] 
	W0916 04:10:17.708112    5638 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:be:e0:20:db:f3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:be:e0:20:db:f3
	W0916 04:10:17.708123    5638 out.go:270] * 
	* 
	W0916 04:10:17.708733    5638 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:10:17.787063    5638 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-218000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-218000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-218000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (178.645302ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-218000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-218000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-16 04:10:18.066885 -0700 PDT m=+3026.728747609
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-218000 -n force-systemd-flag-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-218000 -n force-systemd-flag-218000: exit status 7 (79.58183ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 04:10:18.144571    5917 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:10:18.144592    5917 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-218000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-218000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-218000: (5.257943937s)
--- FAIL: TestForceSystemdFlag (252.09s)

                                                
                                    
TestForceSystemdEnv (234.87s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-471000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0916 04:06:01.620636    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-471000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m49.317219896s)

                                                
                                                
-- stdout --
	* [force-systemd-env-471000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-471000" primary control-plane node in "force-systemd-env-471000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-471000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:03:19.617940    5511 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:03:19.618117    5511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:19.618123    5511 out.go:358] Setting ErrFile to fd 2...
	I0916 04:03:19.618126    5511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:03:19.618286    5511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 04:03:19.619753    5511 out.go:352] Setting JSON to false
	I0916 04:03:19.642040    5511 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3769,"bootTime":1726480830,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 04:03:19.642195    5511 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:03:19.664125    5511 out.go:177] * [force-systemd-env-471000] minikube v1.34.0 on Darwin 14.6.1
	I0916 04:03:19.705615    5511 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:03:19.705661    5511 notify.go:220] Checking for updates...
	I0916 04:03:19.747659    5511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 04:03:19.768489    5511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 04:03:19.789734    5511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:03:19.810637    5511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:03:19.831500    5511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0916 04:03:19.853080    5511 config.go:182] Loaded profile config "offline-docker-384000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:03:19.853159    5511 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:03:19.881620    5511 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 04:03:19.923613    5511 start.go:297] selected driver: hyperkit
	I0916 04:03:19.923624    5511 start.go:901] validating driver "hyperkit" against <nil>
	I0916 04:03:19.923632    5511 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:03:19.926461    5511 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:03:19.926579    5511 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 04:03:19.934860    5511 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 04:03:19.938768    5511 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:03:19.938786    5511 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 04:03:19.938815    5511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:03:19.939038    5511 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 04:03:19.939066    5511 cni.go:84] Creating CNI manager for ""
	I0916 04:03:19.939103    5511 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:03:19.939114    5511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:03:19.939168    5511 start.go:340] cluster config:
	{Name:force-systemd-env-471000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:03:19.939256    5511 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:03:19.981561    5511 out.go:177] * Starting "force-systemd-env-471000" primary control-plane node in "force-systemd-env-471000" cluster
	I0916 04:03:20.002635    5511 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:03:20.002658    5511 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 04:03:20.002670    5511 cache.go:56] Caching tarball of preloaded images
	I0916 04:03:20.002759    5511 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 04:03:20.002767    5511 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:03:20.002832    5511 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/force-systemd-env-471000/config.json ...
	I0916 04:03:20.002850    5511 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/force-systemd-env-471000/config.json: {Name:mk8a195ccedd95c40c21f0db632ed21d52ffdc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:03:20.003192    5511 start.go:360] acquireMachinesLock for force-systemd-env-471000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:04:00.038556    5511 start.go:364] duration metric: took 40.03478766s to acquireMachinesLock for "force-systemd-env-471000"
	I0916 04:04:00.038604    5511 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:04:00.038651    5511 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:04:00.060343    5511 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:04:00.060485    5511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:04:00.060525    5511 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:04:00.069066    5511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52680
	I0916 04:04:00.069437    5511 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:04:00.069827    5511 main.go:141] libmachine: Using API Version  1
	I0916 04:04:00.069837    5511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:04:00.070035    5511 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:04:00.070148    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .GetMachineName
	I0916 04:04:00.070251    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .DriverName
	I0916 04:04:00.070354    5511 start.go:159] libmachine.API.Create for "force-systemd-env-471000" (driver="hyperkit")
	I0916 04:04:00.070372    5511 client.go:168] LocalClient.Create starting
	I0916 04:04:00.070402    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:04:00.070456    5511 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:00.070470    5511 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:00.070523    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:04:00.070563    5511 main.go:141] libmachine: Decoding PEM data...
	I0916 04:04:00.070570    5511 main.go:141] libmachine: Parsing certificate...
	I0916 04:04:00.070588    5511 main.go:141] libmachine: Running pre-create checks...
	I0916 04:04:00.070596    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .PreCreateCheck
	I0916 04:04:00.070675    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.070889    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .GetConfigRaw
	I0916 04:04:00.081149    5511 main.go:141] libmachine: Creating machine...
	I0916 04:04:00.081158    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .Create
	I0916 04:04:00.081256    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.081376    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:04:00.081247    5541 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:04:00.081431    5511 main.go:141] libmachine: (force-systemd-env-471000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:04:00.284887    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:04:00.284754    5541 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/id_rsa...
	I0916 04:04:00.323503    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:04:00.323433    5541 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk...
	I0916 04:04:00.323513    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Writing magic tar header
	I0916 04:04:00.323527    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Writing SSH key tar header
	I0916 04:04:00.323842    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:04:00.323805    5541 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000 ...
	I0916 04:04:00.813138    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.813154    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid
	I0916 04:04:00.813163    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Using UUID ec94a657-7f8e-4217-af18-5df498c1daad
	I0916 04:04:00.839589    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Generated MAC 56:15:50:e7:43:c
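Note the generated MAC `56:15:50:e7:43:c`: the final octet is printed as `c`, not `0c`. Octets are formatted with `%x` and so are not zero-padded, which conveniently matches the unpadded `hw_address` values macOS writes into its DHCP lease database (see the `/var/db/dhcpd_leases` entries below). A hedged sketch of generating and formatting such a MAC (the real driver has its own generation logic; this only illustrates the formatting):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomMAC returns a random locally-administered unicast MAC address,
// formatted without zero padding on single-digit octets.
func randomMAC() (string, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	// Set the locally-administered bit, clear the multicast bit.
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return formatMAC(buf), nil
}

// formatMAC uses %x per octet, so 0x0c renders as "c" -- the same
// convention as the "Generated MAC 56:15:50:e7:43:c" log line.
func formatMAC(b []byte) string {
	s := ""
	for i, octet := range b {
		if i > 0 {
			s += ":"
		}
		s += fmt.Sprintf("%x", octet)
	}
	return s
}

func main() {
	mac, _ := randomMAC()
	fmt.Println("Generated MAC", mac)
}
```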
	I0916 04:04:00.839608    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000
	I0916 04:04:00.839640    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ec94a657-7f8e-4217-af18-5df498c1daad", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:04:00.839663    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ec94a657-7f8e-4217-af18-5df498c1daad", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:04:00.839703    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ec94a657-7f8e-4217-af18-5df498c1daad", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000"}
	I0916 04:04:00.839736    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ec94a657-7f8e-4217-af18-5df498c1daad -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000"
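In the hyperkit command line above, each `-s` flag attaches a PCI device to a slot (hostbridge, lpc, virtio-net, the virtio-blk root disk, the ahci-cd ISO, virtio-rnd), and `-f kexec,...` boots the extracted kernel and initrd directly rather than through a bootloader. A sketch of assembling that argv from a config; all paths, the UUID, and the helper name are placeholders, not the driver's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// hyperkitArgs builds a trimmed-down version of the argv seen in the log.
// The serial-console (-l) and kexec (-f) flags are omitted for brevity.
func hyperkitArgs(stateDir, uuid string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file for lifecycle tracking
		"-c", fmt.Sprintf("%d", cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // networking via vmnet
		"-U", uuid, // stable UUID so the VM keeps its DHCP identity
		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd", // entropy source for the guest
	}
}

func main() {
	args := hyperkitArgs("/tmp/machine", "ec94a657-7f8e-4217-af18-5df498c1daad", 2, 2048)
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	// Actually running this requires hyperkit installed and root privileges;
	// here we only print the command that would be executed.
	fmt.Println(cmd.String())
}
```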
	I0916 04:04:00.839756    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:04:00.842635    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 DEBUG: hyperkit: Pid is 5544
	I0916 04:04:00.843081    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 0
	I0916 04:04:00.843100    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:00.843176    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:00.844191    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:00.844259    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:00.844282    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:00.844329    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:00.844354    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:00.844371    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:00.844389    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:00.844405    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:00.844419    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:00.844434    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:00.844452    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:00.844467    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:00.844483    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:00.844496    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:00.844510    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:00.844524    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:00.844553    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
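The lease search above is how the driver learns the new VM's IP: it scans macOS's DHCP lease database for an entry whose hardware address matches the generated MAC, and until one appears (the VM has not requested a lease yet) it finds only the 15 entries of earlier machines. A sketch of that lookup, assuming a lease-file layout like the one below (field names are from real `/var/db/dhcpd_leases` files, but the exact format can vary by macOS version, and `ipForMAC` is an illustrative helper, not the driver's code):

```go
package main

import (
	"fmt"
	"strings"
)

// Approximate shape of one /var/db/dhcpd_leases entry.
const sampleLeases = `{
	name=minikube
	ip_address=192.169.0.16
	hw_address=1,ca:ce:7f:3b:e5:57
	identifier=1,ca:ce:7f:3b:e5:57
	lease=0x66e9617c
}`

// ipForMAC returns the ip_address of the lease entry whose hw_address
// matches mac, or "" when the MAC has no lease yet -- the empty result
// is what drives the "Attempt N" retries in the log.
func ipForMAC(leases, mac string) string {
	for _, block := range strings.Split(leases, "}") {
		var ip string
		matched := false
		for _, line := range strings.Split(block, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// hw_address is "<type>,<mac>"; compare only the MAC part.
				if _, m, found := strings.Cut(v, ","); found && m == mac {
					matched = true
				}
			}
		}
		if matched {
			return ip
		}
	}
	return ""
}

func main() {
	fmt.Println(ipForMAC(sampleLeases, "ca:ce:7f:3b:e5:57")) // 192.169.0.16
	fmt.Println(ipForMAC(sampleLeases, "56:15:50:e7:43:c"))  // empty: no lease yet
}
```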
	I0916 04:04:00.850679    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:04:00.858604    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:04:00.859327    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:04:00.859348    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:04:00.859360    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:04:00.859373    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:04:01.236637    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:04:01.236653    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:04:01.351297    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:04:01.351322    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:04:01.351341    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:04:01.351350    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:04:01.352159    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:04:01.352175    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:04:02.844667    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 1
	I0916 04:04:02.844680    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:02.844791    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:02.845587    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:02.845633    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:02.845653    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:02.845664    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:02.845670    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:02.845676    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:02.845682    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:02.845689    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:02.845697    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:02.845716    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:02.845728    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:02.845736    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:02.845743    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:02.845750    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:02.845764    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:02.845771    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:02.845779    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:04.846373    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 2
	I0916 04:04:04.846386    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:04.846458    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:04.847269    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:04.847306    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:04.847324    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:04.847339    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:04.847351    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:04.847358    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:04.847365    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:04.847370    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:04.847388    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:04.847396    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:04.847402    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:04.847419    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:04.847433    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:04.847448    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:04.847457    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:04.847467    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:04.847475    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:06.731610    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:06 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 04:04:06.731766    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:06 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 04:04:06.731774    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:06 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 04:04:06.751330    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:04:06 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 04:04:06.848142    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 3
	I0916 04:04:06.848174    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:06.848381    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:06.849421    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:06.849512    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:06.849526    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:06.849539    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:06.849547    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:06.849567    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:06.849606    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:06.849619    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:06.849629    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:06.849642    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:06.849652    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:06.849663    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:06.849676    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:06.849686    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:06.849700    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:06.849710    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:06.849721    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:08.849871    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 4
	I0916 04:04:08.849893    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:08.849956    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:08.850761    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:08.850812    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:08.850822    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:08.850833    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:08.850840    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:08.850846    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:08.850854    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:08.850869    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:08.850875    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:08.850881    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:08.850887    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:08.850902    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:08.850914    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:08.850930    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:08.850942    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:08.850949    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:08.850957    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:10.852982    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 5
	I0916 04:04:10.852994    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:10.853028    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:10.853855    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:10.853863    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:10.853872    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:10.853877    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:10.853883    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:10.853888    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:10.853894    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:10.853899    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:10.853907    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:10.853913    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:10.853919    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:10.853925    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:10.853936    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:10.853949    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:10.853956    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:10.853962    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:10.853979    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:12.854247    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 6
	I0916 04:04:12.854263    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:12.854308    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:12.855128    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:12.855155    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:12.855172    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:12.855191    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:12.855198    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:12.855207    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:12.855214    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:12.855221    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:12.855229    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:12.855238    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:12.855244    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:12.855250    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:12.855263    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:12.855270    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:12.855277    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:12.855293    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:12.855309    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:14.855814    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 7
	I0916 04:04:14.855828    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:14.855889    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:14.856620    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:14.856669    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:14.856679    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:14.856689    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:14.856699    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:14.856706    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:14.856714    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:14.856721    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:14.856727    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:14.856733    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:14.856759    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:14.856771    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:14.856783    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:14.856789    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:14.856796    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:14.856801    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:14.856807    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:16.858832    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 8
	I0916 04:04:16.858847    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:16.858882    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:16.859663    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:16.859700    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:16.859721    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:16.859734    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:16.859745    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:16.859764    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:16.859774    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:16.859787    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:16.859798    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:16.859811    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:16.859819    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:16.859827    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:16.859838    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:16.859850    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:16.859868    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:16.859877    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:16.859887    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:18.861909    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 9
	I0916 04:04:18.861923    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:18.861991    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:18.862765    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:18.862817    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:18.862829    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:18.862849    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:18.862856    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:18.862863    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:18.862869    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:18.862876    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:18.862884    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:18.862904    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:18.862916    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:18.862926    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:18.862936    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:18.862944    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:18.862952    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:18.862963    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:18.862970    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:20.864561    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 10
	I0916 04:04:20.864578    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:20.864640    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:20.865394    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:20.865454    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:20.865466    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:20.865475    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:20.865517    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:20.865528    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:20.865537    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:20.865545    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:20.865552    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:20.865560    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:20.865566    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:20.865574    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:20.865591    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:20.865603    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:20.865619    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:20.865632    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:20.865642    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:22.865753    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 11
	I0916 04:04:22.865768    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:22.865842    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:22.866717    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:22.866762    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:22.866772    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:22.866803    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:22.866814    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:22.866840    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:22.866851    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:22.866859    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:22.866874    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:22.866888    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:22.866896    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:22.866908    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:22.866915    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:22.866923    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:22.866932    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:22.866939    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:22.866959    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:24.867903    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 12
	I0916 04:04:24.867918    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:24.867990    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:24.868764    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:24.868811    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:24.868821    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:24.868829    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:24.868835    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:24.868842    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:24.868850    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:24.868857    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:24.868865    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:24.868872    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:24.868879    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:24.868886    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:24.868898    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:24.868905    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:24.868913    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:24.868927    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:24.868935    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:26.871048    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 13
	I0916 04:04:26.871061    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:26.871121    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:26.871975    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:26.872025    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:26.872035    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:26.872061    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:26.872075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:26.872084    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:26.872090    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:26.872096    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:26.872105    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:26.872111    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:26.872122    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:26.872129    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:26.872136    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:26.872143    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:26.872155    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:26.872165    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:26.872173    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:28.872318    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 14
	I0916 04:04:28.872331    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:28.872386    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:28.873167    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:28.873218    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:28.873225    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:28.873233    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:28.873238    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:28.873267    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:28.873278    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:28.873285    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:28.873294    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:28.873308    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:28.873314    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:28.873320    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:28.873327    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:28.873334    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:28.873341    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:28.873348    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:28.873357    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:30.875445    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 15
	I0916 04:04:30.875456    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:30.875514    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:30.876346    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:30.876382    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:30.876392    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:30.876401    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:30.876409    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:30.876416    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:30.876422    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:30.876428    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:30.876434    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:30.876443    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:30.876454    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:30.876463    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:30.876479    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:30.876492    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:30.876500    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:30.876505    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:30.876520    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:32.876787    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 16
	I0916 04:04:32.876800    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:32.876809    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:32.877634    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:32.877666    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:32.877682    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:32.877693    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:32.877705    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:32.877713    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:32.877719    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:32.877733    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:32.877747    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:32.877755    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:32.877763    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:32.877774    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:32.877782    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:32.877802    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:32.877814    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:32.877820    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:32.877829    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:34.878820    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 17
	I0916 04:04:34.878831    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:34.878934    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:34.879696    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:34.879742    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:34.879752    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:34.879767    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:34.879787    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:34.879802    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:34.879819    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:34.879827    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:34.879833    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:34.879854    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:34.879868    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:34.879878    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:34.879886    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:34.879893    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:34.879900    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:34.879906    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:34.879914    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:36.881965    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 18
	I0916 04:04:36.881978    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:36.882102    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:36.882872    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:36.882928    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:36.882938    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:36.882947    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:36.882954    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:36.882960    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:36.882965    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:36.882982    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:36.882992    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:36.883010    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:36.883022    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:36.883036    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:36.883046    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:36.883058    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:36.883068    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:36.883075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:36.883082    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:38.885129    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 19
	I0916 04:04:38.885154    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:38.885216    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:38.886006    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:38.886051    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:38.886061    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:38.886082    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:38.886093    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:38.886107    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:38.886116    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:38.886122    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:38.886131    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:38.886159    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:38.886169    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:38.886182    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:38.886192    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:38.886197    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:38.886211    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:38.886224    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:38.886233    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:40.886605    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 20
	I0916 04:04:40.886621    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:40.886683    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:40.887472    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:40.887487    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:40.887502    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:40.887517    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:40.887543    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:40.887551    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:40.887560    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:40.887567    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:40.887574    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:40.887596    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:40.887610    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:40.887625    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:40.887637    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:40.887655    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:40.887664    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:40.887673    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:40.887681    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:42.888815    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 21
	I0916 04:04:42.888829    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:42.888897    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:42.889706    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:42.889745    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:42.889763    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:42.889772    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:42.889779    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:42.889784    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:42.889873    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:42.889922    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:42.889935    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:42.889943    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:42.889950    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:42.889955    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:42.889966    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:42.889979    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:42.889997    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:42.890009    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:42.890018    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:44.891994    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 22
	I0916 04:04:44.892007    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:44.892056    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:44.892978    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:44.893036    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:44.893048    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:44.893057    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:44.893063    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:44.893081    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:44.893093    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:44.893113    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:44.893123    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:44.893130    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:44.893139    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:44.893148    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:44.893155    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:44.893162    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:44.893168    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:44.893181    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:44.893192    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:46.893928    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 23
	I0916 04:04:46.893944    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:46.894006    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:46.894782    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:46.894832    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:46.894844    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:46.894863    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:46.894876    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:46.894884    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:46.894901    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:46.894907    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:46.894914    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:46.894920    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:46.894927    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:46.894935    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:46.894942    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:46.894947    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:46.894954    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:46.894963    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:46.894971    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:48.897078    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 24
	I0916 04:04:48.897090    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:48.897147    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:48.897973    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:48.898046    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:48.898081    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:48.898088    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:48.898095    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:48.898104    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:48.898112    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:48.898132    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:48.898144    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:48.898153    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:48.898158    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:48.898182    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:48.898194    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:48.898212    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:48.898220    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:48.898227    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:48.898234    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:50.899203    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 25
	I0916 04:04:50.899233    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:50.899294    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:50.900128    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:50.900175    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:50.900186    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:50.900194    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:50.900201    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:50.900207    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:50.900213    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:50.900219    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:50.900225    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:50.900231    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:50.900237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:50.900245    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:50.900252    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:50.900269    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:50.900281    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:50.900289    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:50.900296    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:52.900887    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 26
	I0916 04:04:52.900903    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:52.900963    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:52.901745    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:52.901786    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:52.901797    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:52.901814    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:52.901829    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:52.901843    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:52.901853    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:52.901860    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:52.901881    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:52.901890    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:52.901896    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:52.901909    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:52.901918    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:52.901926    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:52.901935    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:52.901940    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:52.901958    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:54.903980    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 27
	I0916 04:04:54.903994    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:54.904051    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:54.904827    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:54.904878    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:54.904888    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:54.904897    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:54.904907    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:54.904927    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:54.904939    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:54.904956    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:54.904968    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:54.904995    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:54.905008    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:54.905015    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:54.905021    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:54.905027    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:54.905033    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:54.905039    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:54.905049    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:56.906323    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 28
	I0916 04:04:56.906340    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:56.906391    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:56.907225    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:56.907275    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:56.907287    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:56.907310    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:56.907319    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:56.907326    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:56.907336    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:56.907346    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:56.907353    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:56.907359    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:56.907367    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:56.907385    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:56.907394    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:56.907402    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:56.907408    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:56.907413    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:56.907418    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:04:58.907662    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 29
	I0916 04:04:58.907674    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:04:58.907730    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:04:58.908639    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 56:15:50:e7:43:c in /var/db/dhcpd_leases ...
	I0916 04:04:58.908648    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:04:58.908656    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:04:58.908662    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:04:58.908681    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:04:58.908691    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:04:58.908699    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:04:58.908708    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:04:58.908718    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:04:58.908726    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:04:58.908753    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:04:58.908762    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:04:58.908769    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:04:58.908777    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:04:58.908784    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:04:58.908792    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:04:58.908799    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:05:00.910911    5511 client.go:171] duration metric: took 1m0.83966931s to LocalClient.Create
	I0916 04:05:02.913065    5511 start.go:128] duration metric: took 1m2.87351562s to createHost
	I0916 04:05:02.913081    5511 start.go:83] releasing machines lock for "force-systemd-env-471000", held for 1m2.873626058s
	W0916 04:05:02.913113    5511 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:15:50:e7:43:c
	I0916 04:05:02.913413    5511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:05:02.913437    5511 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:05:02.921909    5511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52682
	I0916 04:05:02.922261    5511 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:05:02.922614    5511 main.go:141] libmachine: Using API Version  1
	I0916 04:05:02.922628    5511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:05:02.922841    5511 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:05:02.923230    5511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:05:02.923254    5511 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:05:02.931689    5511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52684
	I0916 04:05:02.932030    5511 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:05:02.932321    5511 main.go:141] libmachine: Using API Version  1
	I0916 04:05:02.932339    5511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:05:02.932554    5511 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:05:02.932662    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .GetState
	I0916 04:05:02.932734    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.932807    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:02.933764    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .DriverName
	I0916 04:05:02.976637    5511 out.go:177] * Deleting "force-systemd-env-471000" in hyperkit ...
	I0916 04:05:02.997780    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .Remove
	I0916 04:05:02.997951    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.997973    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.998041    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:02.998984    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:02.999063    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | waiting for graceful shutdown
	I0916 04:05:04.001195    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:04.001313    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:04.002298    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | waiting for graceful shutdown
	I0916 04:05:05.003114    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:05.003190    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:05.004710    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | waiting for graceful shutdown
	I0916 04:05:06.005057    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:06.005128    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:06.005704    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | waiting for graceful shutdown
	I0916 04:05:07.006313    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:07.006380    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:07.006955    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | waiting for graceful shutdown
	I0916 04:05:08.008332    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:05:08.008411    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5544
	I0916 04:05:08.009708    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | sending sigkill
	I0916 04:05:08.009721    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0916 04:05:08.021355    5511 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:15:50:e7:43:c
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:15:50:e7:43:c
	I0916 04:05:08.021372    5511 start.go:729] Will try again in 5 seconds ...
	I0916 04:05:08.033740    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:05:08 WARN : hyperkit: failed to read stdout: EOF
	I0916 04:05:08.033759    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:05:08 WARN : hyperkit: failed to read stderr: EOF
	I0916 04:05:13.022759    5511 start.go:360] acquireMachinesLock for force-systemd-env-471000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:06:05.722773    5511 start.go:364] duration metric: took 52.699238533s to acquireMachinesLock for "force-systemd-env-471000"
	I0916 04:06:05.722822    5511 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:06:05.722873    5511 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:06:05.788551    5511 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0916 04:06:05.788626    5511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:06:05.788652    5511 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:06:05.797195    5511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52688
	I0916 04:06:05.797539    5511 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:06:05.797911    5511 main.go:141] libmachine: Using API Version  1
	I0916 04:06:05.797938    5511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:06:05.798142    5511 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:06:05.798250    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .GetMachineName
	I0916 04:06:05.798345    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .DriverName
	I0916 04:06:05.798469    5511 start.go:159] libmachine.API.Create for "force-systemd-env-471000" (driver="hyperkit")
	I0916 04:06:05.798486    5511 client.go:168] LocalClient.Create starting
	I0916 04:06:05.798514    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:06:05.798567    5511 main.go:141] libmachine: Decoding PEM data...
	I0916 04:06:05.798579    5511 main.go:141] libmachine: Parsing certificate...
	I0916 04:06:05.798620    5511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:06:05.798657    5511 main.go:141] libmachine: Decoding PEM data...
	I0916 04:06:05.798669    5511 main.go:141] libmachine: Parsing certificate...
	I0916 04:06:05.798682    5511 main.go:141] libmachine: Running pre-create checks...
	I0916 04:06:05.798687    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .PreCreateCheck
	I0916 04:06:05.798758    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:05.798793    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .GetConfigRaw
	I0916 04:06:05.809769    5511 main.go:141] libmachine: Creating machine...
	I0916 04:06:05.809785    5511 main.go:141] libmachine: (force-systemd-env-471000) Calling .Create
	I0916 04:06:05.809922    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:05.810045    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:06:05.809875    5623 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:06:05.810091    5511 main.go:141] libmachine: (force-systemd-env-471000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:06:06.114457    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:06:06.114398    5623 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/id_rsa...
	I0916 04:06:06.211706    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:06:06.211645    5623 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk...
	I0916 04:06:06.211719    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Writing magic tar header
	I0916 04:06:06.211735    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Writing SSH key tar header
	I0916 04:06:06.212094    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | I0916 04:06:06.212060    5623 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000 ...
	I0916 04:06:06.585634    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:06.585656    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid
	I0916 04:06:06.585714    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Using UUID 67d27f3d-ef00-4bff-90d4-a2e60ea6d450
	I0916 04:06:06.612016    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Generated MAC 62:3c:c8:40:97:6
	I0916 04:06:06.612032    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000
	I0916 04:06:06.612068    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"67d27f3d-ef00-4bff-90d4-a2e60ea6d450", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:06:06.612094    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"67d27f3d-ef00-4bff-90d4-a2e60ea6d450", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:06:06.612160    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "67d27f3d-ef00-4bff-90d4-a2e60ea6d450", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-sys
temd-env-471000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000"}
	I0916 04:06:06.612196    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 67d27f3d-ef00-4bff-90d4-a2e60ea6d450 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/force-systemd-env-471000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/bzimage,/Users/jenkins/minikube-integration/19
651-1087/.minikube/machines/force-systemd-env-471000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-471000"
	I0916 04:06:06.612206    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:06:06.615183    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 DEBUG: hyperkit: Pid is 5635
	I0916 04:06:06.615610    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 0
	I0916 04:06:06.615634    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:06.615729    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:06.616665    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:06.616731    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:06.616745    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:06.616774    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:06.616787    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:06.616801    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:06.616824    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:06.616846    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:06.616862    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:06.616878    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:06.616898    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:06.616912    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:06.616927    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:06.616941    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:06.616961    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:06.616984    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:06.617004    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:06.622943    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:06:06.630883    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/force-systemd-env-471000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:06:06.631642    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:06:06.631666    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:06:06.631680    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:06:06.631690    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:06:07.006919    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 04:06:07.006934    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 04:06:07.121754    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:06:07.121770    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:06:07.121789    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:06:07.121799    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 04:06:07.122614    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 04:06:07.122630    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 04:06:08.617058    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 1
	I0916 04:06:08.617075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:08.617131    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:08.617946    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:08.618004    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:08.618016    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:08.618024    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:08.618033    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:08.618046    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:08.618053    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:08.618060    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:08.618066    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:08.618075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:08.618080    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:08.618086    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:08.618092    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:08.618106    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:08.618117    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:08.618129    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:08.618137    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:10.620101    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 2
	I0916 04:06:10.620115    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:10.620237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:10.621042    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:10.621107    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:10.621122    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:10.621144    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:10.621154    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:10.621168    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:10.621184    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:10.621193    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:10.621200    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:10.621206    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:10.621213    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:10.621222    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:10.621231    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:10.621237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:10.621249    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:10.621258    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:10.621267    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:12.511531    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 04:06:12.511631    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 04:06:12.511641    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 04:06:12.531796    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | 2024/09/16 04:06:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 04:06:12.623466    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 3
	I0916 04:06:12.623494    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:12.623688    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:12.625192    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:12.625307    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:12.625321    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:12.625337    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:12.625352    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:12.625401    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:12.625416    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:12.625427    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:12.625439    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:12.625451    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:12.625462    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:12.625482    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:12.625499    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:12.625519    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:12.625534    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:12.625545    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:12.625556    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:14.626377    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 4
	I0916 04:06:14.626405    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:14.626484    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:14.627403    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:14.627453    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:14.627463    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:14.627472    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:14.627480    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:14.627496    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:14.627512    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:14.627523    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:14.627531    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:14.627538    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:14.627546    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:14.627554    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:14.627560    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:14.627576    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:14.627583    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:14.627589    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:14.627597    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:16.629598    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 5
	I0916 04:06:16.629614    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:16.629682    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:16.630452    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:16.630504    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:16.630514    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:16.630544    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:16.630556    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:16.630564    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:16.630572    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:16.630586    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:16.630598    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:16.630606    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:16.630613    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:16.630627    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:16.630640    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:16.630657    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:16.630666    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:16.630672    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:16.630683    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:18.631811    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 6
	I0916 04:06:18.631823    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:18.631875    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:18.632870    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:18.632883    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:18.632907    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:18.632925    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:18.632934    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:18.632944    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:18.632954    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:18.632963    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:18.632986    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:18.632999    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:18.633007    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:18.633014    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:18.633030    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:18.633045    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:18.633059    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:18.633069    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:18.633077    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:20.635126    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 7
	I0916 04:06:20.635138    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:20.635168    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:20.636237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:20.636280    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:20.636298    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:20.636310    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:20.636325    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:20.636334    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:20.636341    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:20.636348    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:20.636355    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:20.636363    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:20.636369    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:20.636377    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:20.636384    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:20.636392    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:20.636400    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:20.636408    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:20.636416    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:22.638494    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 8
	I0916 04:06:22.638507    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:22.638552    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:22.639355    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:22.639388    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:22.639395    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:22.639404    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:22.639410    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:22.639418    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:22.639433    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:22.639440    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:22.639445    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:22.639452    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:22.639459    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:22.639467    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:22.639475    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:22.639485    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:22.639493    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:22.639508    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:22.639519    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:24.639978    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 9
	I0916 04:06:24.639994    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:24.640043    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:24.640851    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:24.640893    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:24.640905    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:24.640921    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:24.640929    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:24.640949    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:24.640977    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:24.641004    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:24.641015    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:24.641075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:24.641093    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:24.641101    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:24.641109    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:24.641116    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:24.641124    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:24.641130    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:24.641136    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:26.641363    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 10
	I0916 04:06:26.641379    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:26.641428    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:26.642222    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:26.642265    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:26.642284    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:26.642296    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:26.642302    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:26.642309    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:26.642315    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:26.642321    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:26.642328    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:26.642334    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:26.642339    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:26.642345    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:26.642353    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:26.642358    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:26.642372    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:26.642382    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:26.642393    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:28.644194    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 11
	I0916 04:06:28.644227    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:28.644286    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:28.645130    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:28.645178    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:28.645190    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:28.645202    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:28.645212    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:28.645226    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:28.645238    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:28.645249    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:28.645262    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:28.645270    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:28.645277    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:28.645284    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:28.645291    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:28.645305    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:28.645315    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:28.645327    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:28.645335    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:30.646432    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 12
	I0916 04:06:30.646448    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:30.646498    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:30.647295    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:30.647335    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:30.647349    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:30.647364    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:30.647373    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:30.647381    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:30.647400    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:30.647409    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:30.647417    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:30.647433    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:30.647450    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:30.647464    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:30.647476    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:30.647484    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:30.647492    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:30.647500    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:30.647506    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:32.649415    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 13
	I0916 04:06:32.649445    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:32.649480    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:32.650280    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:32.650338    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:32.650348    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:32.650355    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:32.650369    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:32.650377    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:32.650385    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:32.650395    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:32.650403    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:32.650419    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:32.650438    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:32.650451    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:32.650459    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:32.650466    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:32.650473    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:32.650479    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:32.650484    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:34.652053    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 14
	I0916 04:06:34.652067    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:34.652138    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:34.653162    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:34.653201    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:34.653213    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:34.653235    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:34.653250    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:34.653258    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:34.653263    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:34.653270    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:34.653293    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:34.653309    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:34.653322    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:34.653338    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:34.653351    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:34.653359    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:34.653368    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:34.653381    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:34.653392    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:36.654103    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 15
	I0916 04:06:36.654116    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:36.654221    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:36.654968    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:36.655025    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:36.655037    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:36.655045    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:36.655054    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:36.655074    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:36.655082    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:36.655091    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:36.655099    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:36.655106    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:36.655113    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:36.655120    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:36.655128    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:36.655143    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:36.655155    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:36.655163    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:36.655170    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:38.655550    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 16
	I0916 04:06:38.655565    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:38.655583    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:38.656318    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:38.656359    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:38.656369    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:38.656393    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:38.656405    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:38.656432    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:38.656445    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:38.656452    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:38.656458    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:38.656468    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:38.656475    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:38.656484    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:38.656491    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:38.656504    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:38.656510    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:38.656517    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:38.656525    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:40.656925    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 17
	I0916 04:06:40.656940    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:40.656963    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:40.657744    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:40.657798    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:40.657809    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:40.657827    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:40.657845    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:40.657855    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:40.657861    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:40.657869    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:40.657876    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:40.657882    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:40.657887    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:40.657896    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:40.657903    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:40.657909    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:40.657916    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:40.657937    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:40.657949    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:42.658121    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 18
	I0916 04:06:42.658136    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:42.658183    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:42.658990    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:42.658998    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:42.659006    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:42.659012    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:42.659024    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:42.659032    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:42.659040    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:42.659048    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:42.659063    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:42.659075    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:42.659084    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:42.659091    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:42.659099    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:42.659106    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:42.659118    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:42.659132    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:42.659142    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:44.660120    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 19
	I0916 04:06:44.660132    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:44.660187    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:44.661024    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:44.661047    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:44.661061    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:44.661068    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:44.661076    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:44.661083    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:44.661091    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:44.661109    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:44.661119    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:44.661129    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:44.661135    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:44.661154    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:44.661166    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:44.661174    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:44.661182    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:44.661189    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:44.661196    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:46.662156    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 20
	I0916 04:06:46.662172    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:46.662230    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:46.663049    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:46.663098    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:46.663112    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:46.663124    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:46.663131    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:46.663140    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:46.663148    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:46.663161    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:46.663174    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:46.663196    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:46.663208    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:46.663215    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:46.663221    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:46.663232    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:46.663242    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:46.663249    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:46.663254    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:48.665292    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 21
	I0916 04:06:48.665305    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:48.665361    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:48.666168    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:48.666176    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:48.666184    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:48.666190    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:48.666199    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:48.666204    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:48.666228    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:48.666237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:48.666253    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:48.666266    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:48.666281    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:48.666289    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:48.666297    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:48.666305    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:48.666311    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:48.666317    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:48.666332    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:50.667876    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 22
	I0916 04:06:50.667888    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:50.667943    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:50.668872    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:50.668919    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:50.668931    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:50.668940    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:50.668947    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:50.668953    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:50.668958    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:50.668973    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:50.668984    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:50.669002    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:50.669011    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:50.669017    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:50.669025    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:50.669045    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:50.669055    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:50.669071    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:50.669081    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:52.671166    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 23
	I0916 04:06:52.671188    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:52.671235    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:52.672158    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:52.672195    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:52.672202    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:52.672223    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:52.672244    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:52.672255    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:52.672262    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:52.672270    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:52.672277    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:52.672282    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:52.672288    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:52.672294    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:52.672300    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:52.672321    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:52.672329    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:52.672337    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:52.672345    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:54.673206    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 24
	I0916 04:06:54.673220    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:54.673284    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:54.674078    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:54.674118    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:54.674137    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:54.674153    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:54.674161    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:54.674183    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:54.674197    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:54.674204    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:54.674210    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:54.674216    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:54.674222    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:54.674228    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:54.674234    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:54.674240    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:54.674247    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:54.674253    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:54.674259    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:56.675709    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 25
	I0916 04:06:56.675722    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:56.675782    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:56.676873    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:56.676922    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:56.676933    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:56.676942    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:56.676951    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:56.676959    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:56.676969    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:56.676977    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:56.676984    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:56.676990    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:56.676999    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:56.677007    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:56.677013    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:56.677020    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:56.677026    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:56.677033    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:56.677051    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:06:58.679182    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 26
	I0916 04:06:58.679213    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:06:58.679254    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:06:58.680089    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:06:58.680097    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:06:58.680108    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:06:58.680119    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:06:58.680127    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:06:58.680161    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:06:58.680180    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:06:58.680192    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:06:58.680210    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:06:58.680219    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:06:58.680229    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:06:58.680237    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:06:58.680246    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:06:58.680253    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:06:58.680260    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:06:58.680268    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:06:58.680276    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:00.681168    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 27
	I0916 04:07:00.681179    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:00.681222    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:07:00.681977    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:07:00.682019    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:00.682027    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:00.682036    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:00.682043    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:00.682072    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:00.682085    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:00.682093    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:00.682101    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:00.682107    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:00.682112    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:00.682119    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:00.682126    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:00.682147    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:00.682159    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:00.682165    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:00.682171    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:02.682275    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 28
	I0916 04:07:02.682288    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:02.682341    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:07:02.683125    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:07:02.683184    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:02.683194    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:02.683201    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:02.683206    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:02.683214    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:02.683219    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:02.683225    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:02.683230    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:02.683246    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:02.683255    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:02.683270    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:02.683284    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:02.683291    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:02.683297    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:02.683311    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:02.683328    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:04.684278    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Attempt 29
	I0916 04:07:04.684292    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:07:04.684358    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | hyperkit pid from json: 5635
	I0916 04:07:04.685128    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Searching for 62:3c:c8:40:97:6 in /var/db/dhcpd_leases ...
	I0916 04:07:04.685186    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | Found 15 entries in /var/db/dhcpd_leases!
	I0916 04:07:04.685199    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:07:04.685206    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:07:04.685213    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:07:04.685218    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:07:04.685236    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:07:04.685252    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:07:04.685268    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:07:04.685276    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:07:04.685288    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:07:04.685298    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:07:04.685315    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:07:04.685336    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:07:04.685344    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:07:04.685351    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:07:04.685370    5511 main.go:141] libmachine: (force-systemd-env-471000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:07:06.685956    5511 client.go:171] duration metric: took 1m0.886604833s to LocalClient.Create
	I0916 04:07:08.686591    5511 start.go:128] duration metric: took 1m2.962827544s to createHost
	I0916 04:07:08.686621    5511 start.go:83] releasing machines lock for "force-systemd-env-471000", held for 1m2.96291681s
	W0916 04:07:08.686723    5511 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-471000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:3c:c8:40:97:6
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-471000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:3c:c8:40:97:6
	I0916 04:07:08.750245    5511 out.go:201] 
	W0916 04:07:08.771636    5511 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:3c:c8:40:97:6
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:3c:c8:40:97:6
	W0916 04:07:08.771647    5511 out.go:270] * 
	* 
	W0916 04:07:08.772317    5511 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 04:07:08.835289    5511 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-471000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-471000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-471000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (179.304766ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-471000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-471000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-16 04:07:09.120974 -0700 PDT m=+2837.785477685
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-471000 -n force-systemd-env-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-471000 -n force-systemd-env-471000: exit status 7 (80.046847ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 04:07:09.199151    5692 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:07:09.199172    5692 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-471000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-471000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-471000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-471000: (5.23660202s)
--- FAIL: TestForceSystemdEnv (234.87s)

TestMultiControlPlane/serial/StartCluster (76.98s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-632000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-632000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m16.819367149s)

-- stdout --
	* [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0916 03:29:12.046225    2666 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:29:12.046485    2666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:29:12.046491    2666 out.go:358] Setting ErrFile to fd 2...
	I0916 03:29:12.046495    2666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:29:12.046679    2666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:29:12.048234    2666 out.go:352] Setting JSON to false
	I0916 03:29:12.073311    2666 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1723,"bootTime":1726480829,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:29:12.073426    2666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:29:12.130518    2666 out.go:177] * [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:29:12.172676    2666 notify.go:220] Checking for updates...
	I0916 03:29:12.195662    2666 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:29:12.253794    2666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:29:12.323605    2666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:29:12.345801    2666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:29:12.368518    2666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:29:12.389477    2666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:29:12.410977    2666 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:29:12.441573    2666 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 03:29:12.483497    2666 start.go:297] selected driver: hyperkit
	I0916 03:29:12.483520    2666 start.go:901] validating driver "hyperkit" against <nil>
	I0916 03:29:12.483534    2666 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:29:12.486935    2666 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:29:12.487059    2666 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:29:12.495248    2666 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:29:12.499063    2666 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:29:12.499082    2666 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:29:12.499113    2666 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:29:12.499389    2666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:29:12.499429    2666 cni.go:84] Creating CNI manager for ""
	I0916 03:29:12.499471    2666 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 03:29:12.499482    2666 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 03:29:12.499538    2666 start.go:340] cluster config:
	{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:29:12.499628    2666 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:29:12.541408    2666 out.go:177] * Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	I0916 03:29:12.562581    2666 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:29:12.562677    2666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:29:12.562703    2666 cache.go:56] Caching tarball of preloaded images
	I0916 03:29:12.562942    2666 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:29:12.562997    2666 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:29:12.563577    2666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:29:12.563626    2666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json: {Name:mk7a134be19718f61009586ee81c03fd42650345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:29:12.564234    2666 start.go:360] acquireMachinesLock for ha-632000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:29:12.564342    2666 start.go:364] duration metric: took 86.674µs to acquireMachinesLock for "ha-632000"
	I0916 03:29:12.564389    2666 start.go:93] Provisioning new machine with config: &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:29:12.564488    2666 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 03:29:12.585504    2666 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 03:29:12.585881    2666 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:29:12.585949    2666 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:29:12.596011    2666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50661
	I0916 03:29:12.596349    2666 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:29:12.596755    2666 main.go:141] libmachine: Using API Version  1
	I0916 03:29:12.596765    2666 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:29:12.597002    2666 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:29:12.597158    2666 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:29:12.597266    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:12.597358    2666 start.go:159] libmachine.API.Create for "ha-632000" (driver="hyperkit")
	I0916 03:29:12.597381    2666 client.go:168] LocalClient.Create starting
	I0916 03:29:12.597412    2666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 03:29:12.597465    2666 main.go:141] libmachine: Decoding PEM data...
	I0916 03:29:12.597482    2666 main.go:141] libmachine: Parsing certificate...
	I0916 03:29:12.597536    2666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 03:29:12.597574    2666 main.go:141] libmachine: Decoding PEM data...
	I0916 03:29:12.597609    2666 main.go:141] libmachine: Parsing certificate...
	I0916 03:29:12.597633    2666 main.go:141] libmachine: Running pre-create checks...
	I0916 03:29:12.597642    2666 main.go:141] libmachine: (ha-632000) Calling .PreCreateCheck
	I0916 03:29:12.597727    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:12.597887    2666 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:29:12.598345    2666 main.go:141] libmachine: Creating machine...
	I0916 03:29:12.598354    2666 main.go:141] libmachine: (ha-632000) Calling .Create
	I0916 03:29:12.598427    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:12.598545    2666 main.go:141] libmachine: (ha-632000) DBG | I0916 03:29:12.598422    2674 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:29:12.598604    2666 main.go:141] libmachine: (ha-632000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 03:29:12.780241    2666 main.go:141] libmachine: (ha-632000) DBG | I0916 03:29:12.780149    2674 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa...
	I0916 03:29:12.842485    2666 main.go:141] libmachine: (ha-632000) DBG | I0916 03:29:12.842411    2674 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk...
	I0916 03:29:12.842494    2666 main.go:141] libmachine: (ha-632000) DBG | Writing magic tar header
	I0916 03:29:12.842501    2666 main.go:141] libmachine: (ha-632000) DBG | Writing SSH key tar header
	I0916 03:29:12.843355    2666 main.go:141] libmachine: (ha-632000) DBG | I0916 03:29:12.843258    2674 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000 ...
	I0916 03:29:13.219439    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:13.219464    2666 main.go:141] libmachine: (ha-632000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid
	I0916 03:29:13.219496    2666 main.go:141] libmachine: (ha-632000) DBG | Using UUID 1510d94a-ca8f-47b3-af67-8767f9a495bd
	I0916 03:29:13.327569    2666 main.go:141] libmachine: (ha-632000) DBG | Generated MAC 1a:88:5b:ab:d5:de
	I0916 03:29:13.327588    2666 main.go:141] libmachine: (ha-632000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:29:13.327616    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:29:13.327644    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:29:13.327678    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1510d94a-ca8f-47b3-af67-8767f9a495bd", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:29:13.327705    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1510d94a-ca8f-47b3-af67-8767f9a495bd -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:29:13.327727    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:29:13.330677    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 DEBUG: hyperkit: Pid is 2677
	I0916 03:29:13.331512    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 0
	I0916 03:29:13.331524    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:13.331591    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:13.332554    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:13.332614    2666 main.go:141] libmachine: (ha-632000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0916 03:29:13.332631    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:29:13.332654    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:29:13.332668    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:29:13.338187    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:29:13.389489    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:29:13.390100    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:29:13.390115    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:29:13.390124    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:29:13.390145    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:29:13.770494    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:29:13.770509    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:29:13.885220    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:29:13.885236    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:29:13.885264    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:29:13.885278    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:29:13.886072    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:29:13.886080    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:29:15.333029    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 1
	I0916 03:29:15.333044    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:15.333094    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:15.333885    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:15.333944    2666 main.go:141] libmachine: (ha-632000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0916 03:29:15.333964    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:29:15.333973    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:29:15.333979    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:29:17.334821    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 2
	I0916 03:29:17.334838    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:17.334936    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:17.335713    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:17.335760    2666 main.go:141] libmachine: (ha-632000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0916 03:29:17.335768    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:29:17.335775    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:29:17.335796    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:29:19.336019    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 3
	I0916 03:29:19.336034    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:19.336145    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:19.336879    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:19.336935    2666 main.go:141] libmachine: (ha-632000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0916 03:29:19.336947    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:29:19.336957    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:29:19.336965    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:29:19.465413    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:29:19.465486    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:29:19.465495    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:29:19.488902    2666 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:29:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:29:21.337026    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 4
	I0916 03:29:21.337041    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:21.337108    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:21.337873    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:21.337917    2666 main.go:141] libmachine: (ha-632000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I0916 03:29:21.337927    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:29:21.337948    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:29:21.337959    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:29:23.338595    2666 main.go:141] libmachine: (ha-632000) DBG | Attempt 5
	I0916 03:29:23.338610    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:23.338698    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:23.339471    2666 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:29:23.339531    2666 main.go:141] libmachine: (ha-632000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:29:23.339544    2666 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95a02}
	I0916 03:29:23.339572    2666 main.go:141] libmachine: (ha-632000) DBG | Found match: 1a:88:5b:ab:d5:de
	I0916 03:29:23.339582    2666 main.go:141] libmachine: (ha-632000) DBG | IP: 192.169.0.5
	I0916 03:29:23.339616    2666 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:29:23.340210    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:23.340321    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:23.340412    2666 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 03:29:23.340424    2666 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:29:23.340501    2666 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:29:23.340569    2666 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:29:23.341317    2666 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 03:29:23.341326    2666 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 03:29:23.341330    2666 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 03:29:23.341334    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:23.341424    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:23.341526    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:23.341644    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:23.341736    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:23.341860    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:23.342054    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:23.342061    2666 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 03:29:24.411089    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:29:24.411101    2666 main.go:141] libmachine: Detecting the provisioner...
	I0916 03:29:24.411106    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.411255    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.411356    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.411457    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.411568    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.411719    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:24.411867    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:24.411874    2666 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 03:29:24.475055    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 03:29:24.475110    2666 main.go:141] libmachine: found compatible host: buildroot
	I0916 03:29:24.475116    2666 main.go:141] libmachine: Provisioning with buildroot...
	I0916 03:29:24.475122    2666 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:29:24.475261    2666 buildroot.go:166] provisioning hostname "ha-632000"
	I0916 03:29:24.475272    2666 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:29:24.475381    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.475479    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.475581    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.475681    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.475785    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.475933    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:24.476078    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:24.476085    2666 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000 && echo "ha-632000" | sudo tee /etc/hostname
	I0916 03:29:24.550114    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000
	
	I0916 03:29:24.550132    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.550261    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.550351    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.550449    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.550540    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.550685    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:24.550849    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:24.550861    2666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:29:24.620313    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:29:24.620333    2666 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:29:24.620343    2666 buildroot.go:174] setting up certificates
	I0916 03:29:24.620353    2666 provision.go:84] configureAuth start
	I0916 03:29:24.620362    2666 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:29:24.620499    2666 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:29:24.620603    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.620696    2666 provision.go:143] copyHostCerts
	I0916 03:29:24.620731    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:29:24.620792    2666 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:29:24.620799    2666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:29:24.620933    2666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:29:24.621128    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:29:24.621158    2666 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:29:24.621163    2666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:29:24.621240    2666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:29:24.621378    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:29:24.621420    2666 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:29:24.621424    2666 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:29:24.621511    2666 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:29:24.621649    2666 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000 san=[127.0.0.1 192.169.0.5 ha-632000 localhost minikube]
	I0916 03:29:24.824976    2666 provision.go:177] copyRemoteCerts
	I0916 03:29:24.825057    2666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:29:24.825089    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.825304    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.825433    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.825519    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.825637    2666 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:29:24.865400    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:29:24.865467    2666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:29:24.885020    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:29:24.885079    2666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 03:29:24.904663    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:29:24.904718    2666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:29:24.924184    2666 provision.go:87] duration metric: took 303.822348ms to configureAuth
	I0916 03:29:24.924197    2666 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:29:24.924324    2666 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:29:24.924337    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:24.924472    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.924552    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.924640    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.924736    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.924825    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.924962    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:24.925084    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:24.925092    2666 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:29:24.989060    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:29:24.989072    2666 buildroot.go:70] root file system type: tmpfs
	I0916 03:29:24.989155    2666 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:29:24.989169    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:24.989310    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:24.989403    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.989513    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:24.989600    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:24.989739    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:24.989875    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:24.989919    2666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:29:25.064719    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:29:25.064739    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:25.064875    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:25.064971    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:25.065076    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:25.065162    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:25.065297    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:25.065438    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:25.065454    2666 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:29:26.669124    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:29:26.669140    2666 main.go:141] libmachine: Checking connection to Docker...
	I0916 03:29:26.669147    2666 main.go:141] libmachine: (ha-632000) Calling .GetURL
	I0916 03:29:26.669278    2666 main.go:141] libmachine: Docker is up and running!
	I0916 03:29:26.669286    2666 main.go:141] libmachine: Reticulating splines...
	I0916 03:29:26.669295    2666 client.go:171] duration metric: took 14.072130621s to LocalClient.Create
	I0916 03:29:26.669306    2666 start.go:167] duration metric: took 14.072175464s to libmachine.API.Create "ha-632000"
	I0916 03:29:26.669315    2666 start.go:293] postStartSetup for "ha-632000" (driver="hyperkit")
	I0916 03:29:26.669322    2666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:29:26.669334    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:26.669484    2666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:29:26.669498    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:26.669582    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:26.669664    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:26.669754    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:26.669837    2666 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:29:26.712765    2666 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:29:26.716059    2666 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:29:26.716081    2666 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:29:26.716181    2666 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:29:26.716363    2666 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:29:26.716370    2666 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:29:26.716588    2666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:29:26.725822    2666 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:29:26.758145    2666 start.go:296] duration metric: took 88.822263ms for postStartSetup
	I0916 03:29:26.758176    2666 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:29:26.758765    2666 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:29:26.758919    2666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:29:26.759248    2666 start.go:128] duration metric: took 14.194974278s to createHost
	I0916 03:29:26.759262    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:26.759352    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:26.759436    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:26.759508    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:26.759596    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:26.759693    2666 main.go:141] libmachine: Using SSH client type: native
	I0916 03:29:26.759814    2666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x645b820] 0x645e500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:29:26.759821    2666 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:29:26.822601    2666 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482566.025457330
	
	I0916 03:29:26.822615    2666 fix.go:216] guest clock: 1726482566.025457330
	I0916 03:29:26.822620    2666 fix.go:229] Guest: 2024-09-16 03:29:26.02545733 -0700 PDT Remote: 2024-09-16 03:29:26.759256 -0700 PDT m=+14.749847958 (delta=-733.79867ms)
	I0916 03:29:26.822640    2666 fix.go:200] guest clock delta is within tolerance: -733.79867ms
	I0916 03:29:26.822644    2666 start.go:83] releasing machines lock for "ha-632000", held for 14.258520509s
	I0916 03:29:26.822661    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:26.822788    2666 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:29:26.822890    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:26.823237    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:26.823353    2666 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:29:26.823444    2666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:29:26.823474    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:26.823490    2666 ssh_runner.go:195] Run: cat /version.json
	I0916 03:29:26.823501    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:29:26.823559    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:26.823588    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:29:26.823662    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:26.823677    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:29:26.823756    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:26.823790    2666 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:29:26.823868    2666 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:29:26.823904    2666 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:29:26.856929    2666 ssh_runner.go:195] Run: systemctl --version
	I0916 03:29:26.905639    2666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:29:26.910698    2666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:29:26.910756    2666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:29:26.922736    2666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:29:26.922751    2666 start.go:495] detecting cgroup driver to use...
	I0916 03:29:26.922850    2666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:29:26.937698    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:29:26.946989    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:29:26.955500    2666 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:29:26.955554    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:29:26.963944    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:29:26.972329    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:29:26.980578    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:29:26.988764    2666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:29:26.997224    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:29:27.005558    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:29:27.014249    2666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
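The run of `sed` commands above rewrites `/etc/containerd/config.toml` so containerd uses the "cgroupfs" cgroup driver. A minimal sketch of the key `SystemdCgroup` rewrite, run against a throwaway temp file rather than the VM's real config (the file contents here are illustrative):

```shell
# Demonstrate the SystemdCgroup rewrite from the log on a scratch copy.
cfg="$(mktemp)"
printf '    SystemdCgroup = true\n' > "$cfg"
# Same sed expression minikube runs: force cgroupfs by setting SystemdCgroup = false,
# preserving the original indentation via the captured group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # →     SystemdCgroup = false
rm -f "$cfg"
```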
	I0916 03:29:27.023788    2666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:29:27.031843    2666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:29:27.039880    2666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:29:27.137544    2666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:29:27.156465    2666 start.go:495] detecting cgroup driver to use...
	I0916 03:29:27.156554    2666 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:29:27.179039    2666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:29:27.189896    2666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:29:27.208808    2666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:29:27.221006    2666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:29:27.232594    2666 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:29:27.256034    2666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:29:27.267383    2666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:29:27.282428    2666 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:29:27.285405    2666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:29:27.293403    2666 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:29:27.307989    2666 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:29:27.412150    2666 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:29:27.507683    2666 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:29:27.507759    2666 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
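The 130-byte `/etc/docker/daemon.json` copied above is what switches Docker itself to the "cgroupfs" driver. Its exact contents are not captured in this log; on minikube guests it typically resembles the following (an assumption, not output from this run):

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```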
	I0916 03:29:27.522787    2666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:29:27.624402    2666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:30:28.641048    2666 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01759959s)
	I0916 03:30:28.641124    2666 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0916 03:30:28.675863    2666 out.go:201] 
	W0916 03:30:28.697686    2666 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 16 10:29:24 ha-632000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:29:24 ha-632000 dockerd[510]: time="2024-09-16T10:29:24.583672919Z" level=info msg="Starting up"
	Sep 16 10:29:24 ha-632000 dockerd[510]: time="2024-09-16T10:29:24.584133260Z" level=info msg="containerd not running, starting managed containerd"
	Sep 16 10:29:24 ha-632000 dockerd[510]: time="2024-09-16T10:29:24.584808978Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.604073316Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618643936Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618684062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618723949Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618733694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618783112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618814684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.618968513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.619002880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.619014135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.619021484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.619074745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.619218065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.620726141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.620763115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.620865881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.620898751Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.620968712Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.621057743Z" level=info msg="metadata content store policy set" policy=shared
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.626873651Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.626920607Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.626959965Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.626977268Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.626986765Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627051311Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627212858Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627308125Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627342684Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627354013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627369955Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627382667Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627395126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627406060Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627415751Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627424395Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627432147Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627439271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627454062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627463753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627471492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627479313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627488854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627496945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627504190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627517542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627535963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627547215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627554615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627562239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627569844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627579037Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627592600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627601931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627609261Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627655230Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627668430Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627675883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627684008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627690510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627698058Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627706815Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627846300Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627902899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627928433Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627940055Z" level=info msg="containerd successfully booted in 0.024572s"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.653697419Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.664280175Z" level=info msg="Loading containers: start."
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.747362848Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.832622391Z" level=info msg="Loading containers: done."
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.840655005Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.840746737Z" level=info msg="Daemon has completed initialization"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.870010246Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.870152556Z" level=info msg="API listen on [::]:2376"
	Sep 16 10:29:25 ha-632000 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.838391740Z" level=info msg="Processing signal 'terminated'"
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839037146Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 16 10:29:26 ha-632000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839369113Z" level=info msg="Daemon shutdown complete"
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839410336Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839448313Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 16 10:29:27 ha-632000 systemd[1]: docker.service: Deactivated successfully.
	Sep 16 10:29:27 ha-632000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 10:29:27 ha-632000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:29:27 ha-632000 dockerd[915]: time="2024-09-16T10:29:27.874744287Z" level=info msg="Starting up"
	Sep 16 10:30:28 ha-632000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 10:30:28 ha-632000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 10:30:28 ha-632000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 10:30:28 ha-632000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
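The decisive line in the journal above is dockerd timing out while dialing the system containerd socket ("context deadline exceeded" against `/run/containerd/containerd.sock`). A self-contained sketch of isolating that root-cause fragment from a saved journal dump; the log line is embedded here verbatim for illustration:

```shell
# Pull the "failed to dial" fragment out of a captured journal line.
# The line below is copied from the journal dump above.
line='Sep 16 10:30:28 ha-632000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded'
printf '%s\n' "$line" | grep -o 'failed to dial "[^"]*"' | head -n1
# → failed to dial "/run/containerd/containerd.sock"
```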
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627479313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627488854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627496945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627504190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627517542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627535963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627547215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627554615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627562239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627569844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627579037Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627592600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627601931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627609261Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627655230Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627668430Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627675883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627684008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627690510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627698058Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627706815Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627846300Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627902899Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627928433Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 16 10:29:24 ha-632000 dockerd[517]: time="2024-09-16T10:29:24.627940055Z" level=info msg="containerd successfully booted in 0.024572s"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.653697419Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.664280175Z" level=info msg="Loading containers: start."
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.747362848Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.832622391Z" level=info msg="Loading containers: done."
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.840655005Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.840746737Z" level=info msg="Daemon has completed initialization"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.870010246Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 16 10:29:25 ha-632000 dockerd[510]: time="2024-09-16T10:29:25.870152556Z" level=info msg="API listen on [::]:2376"
	Sep 16 10:29:25 ha-632000 systemd[1]: Started Docker Application Container Engine.
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.838391740Z" level=info msg="Processing signal 'terminated'"
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839037146Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 16 10:29:26 ha-632000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839369113Z" level=info msg="Daemon shutdown complete"
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839410336Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 16 10:29:26 ha-632000 dockerd[510]: time="2024-09-16T10:29:26.839448313Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 16 10:29:27 ha-632000 systemd[1]: docker.service: Deactivated successfully.
	Sep 16 10:29:27 ha-632000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 10:29:27 ha-632000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 10:29:27 ha-632000 dockerd[915]: time="2024-09-16T10:29:27.874744287Z" level=info msg="Starting up"
	Sep 16 10:30:28 ha-632000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 10:30:28 ha-632000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 10:30:28 ha-632000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 10:30:28 ha-632000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0916 03:30:28.697759    2666 out.go:270] * 
	W0916 03:30:28.699065    2666 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:30:28.761570    2666 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 start -p ha-632000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (149.588982ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:30:28.957629    2998 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/StartCluster (76.98s)

TestMultiControlPlane/serial/DeployApp (81.75s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (102.507352ms)

** stderr ** 
	error: cluster "ha-632000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- rollout status deployment/busybox: exit status 1 (102.638388ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.406972ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.10369ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.677062ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.741788ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.005464ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.451117ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.903684ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.885567ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.0945ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.712181ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.859022ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.936149ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.879953ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (101.174736ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (147.872314ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:50.707543    3068 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DeployApp (81.75s)

TestMultiControlPlane/serial/PingHostFromPods (0.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-632000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.849014ms)

** stderr ** 
	error: no server found for cluster "ha-632000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (147.217034ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:50.955851    3076 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.25s)

TestMultiControlPlane/serial/AddWorkerNode (0.34s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-632000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-632000 -v=7 --alsologtostderr: exit status 103 (186.660721ms)

-- stdout --
	* The control-plane node ha-632000 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-632000"

-- /stdout --
** stderr ** 
	I0916 03:31:51.022170    3081 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:51.022537    3081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:51.022543    3081 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:51.022547    3081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:51.022735    3081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:51.023104    3081 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:51.023417    3081 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:51.023769    3081 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:51.023809    3081 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:51.032041    3081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50713
	I0916 03:31:51.032483    3081 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:51.032887    3081 main.go:141] libmachine: Using API Version  1
	I0916 03:31:51.032897    3081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:51.033107    3081 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:51.033226    3081 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:51.033328    3081 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:51.033387    3081 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:51.034323    3081 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:51.034570    3081 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:51.034591    3081 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:51.042876    3081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50715
	I0916 03:31:51.043246    3081 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:51.043560    3081 main.go:141] libmachine: Using API Version  1
	I0916 03:31:51.043579    3081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:51.043813    3081 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:51.043922    3081 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:51.044016    3081 api_server.go:166] Checking apiserver status ...
	I0916 03:31:51.044077    3081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:31:51.044101    3081 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:51.044185    3081 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:51.044271    3081 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:51.044357    3081 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:51.044445    3081 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	W0916 03:31:51.085576    3081 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:51.108698    3081 out.go:177] * The control-plane node ha-632000 apiserver is not running: (state=Stopped)
	I0916 03:31:51.130080    3081 out.go:177]   To start a cluster, run: "minikube start -p ha-632000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-632000 -v=7 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (149.984521ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:51.292799    3085 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.34s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-632000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-632000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.321349ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-632000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-632000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-632000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (147.90093ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:51.477643    3091 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-632000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\"
,\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000}
,\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-632000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\"
:60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (147.909972ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:51.788765    3101 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

TestMultiControlPlane/serial/CopyFile (0.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status --output json -v=7 --alsologtostderr: exit status 6 (154.180071ms)

-- stdout --
	{"Name":"ha-632000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	I0916 03:31:51.854762    3106 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:51.854939    3106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:51.854945    3106 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:51.854949    3106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:51.855120    3106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:51.855313    3106 out.go:352] Setting JSON to true
	I0916 03:31:51.855335    3106 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:51.855374    3106 notify.go:220] Checking for updates...
	I0916 03:31:51.855658    3106 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:51.855673    3106 status.go:255] checking status of ha-632000 ...
	I0916 03:31:51.856108    3106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:51.856148    3106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:51.864982    3106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50749
	I0916 03:31:51.865313    3106 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:51.865713    3106 main.go:141] libmachine: Using API Version  1
	I0916 03:31:51.865724    3106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:51.865934    3106 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:51.866043    3106 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:51.866128    3106 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:51.866193    3106 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:51.867134    3106 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:51.867152    3106 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:51.867418    3106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:51.867437    3106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:51.875879    3106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50751
	I0916 03:31:51.876252    3106 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:51.876595    3106 main.go:141] libmachine: Using API Version  1
	I0916 03:31:51.876609    3106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:51.876849    3106 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:51.883004    3106 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:51.883115    3106 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:51.883366    3106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:51.883396    3106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:51.891789    3106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50753
	I0916 03:31:51.892121    3106 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:51.892456    3106 main.go:141] libmachine: Using API Version  1
	I0916 03:31:51.892474    3106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:51.892686    3106 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:51.892790    3106 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:51.892924    3106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:51.892941    3106 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:51.893022    3106 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:51.893107    3106 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:51.893196    3106 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:51.893285    3106 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:51.927945    3106 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:51.932308    3106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:51.942913    3106 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:51.942934    3106 api_server.go:166] Checking apiserver status ...
	I0916 03:31:51.942974    3106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:51.952810    3106 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:51.952820    3106 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:51.952828    3106 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-632000 status --output json -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (148.829609ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:52.092079    3111 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.30s)

TestMultiControlPlane/serial/StopSecondaryNode (0.45s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 node stop m02 -v=7 --alsologtostderr: exit status 85 (152.119316ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0916 03:31:52.158446    3116 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:52.158857    3116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.158863    3116 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:52.158867    3116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.159046    3116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:52.159392    3116 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:52.159722    3116 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:52.181370    3116 out.go:201] 
	W0916 03:31:52.203109    3116 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0916 03:31:52.203133    3116 out.go:270] * 
	* 
	W0916 03:31:52.206482    3116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:31:52.232769    3116 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-amd64 -p ha-632000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (147.511799ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:31:52.310429    3118 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:52.310605    3118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.310610    3118 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:52.310613    3118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.310779    3118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:52.310964    3118 out.go:352] Setting JSON to false
	I0916 03:31:52.310985    3118 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:52.311033    3118 notify.go:220] Checking for updates...
	I0916 03:31:52.311262    3118 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:52.311277    3118 status.go:255] checking status of ha-632000 ...
	I0916 03:31:52.311729    3118 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:52.311788    3118 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:52.320275    3118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50765
	I0916 03:31:52.320599    3118 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:52.320997    3118 main.go:141] libmachine: Using API Version  1
	I0916 03:31:52.321012    3118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:52.321207    3118 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:52.321313    3118 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:52.321389    3118 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:52.321457    3118 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:52.322421    3118 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:52.322440    3118 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:52.322701    3118 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:52.322719    3118 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:52.330963    3118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50767
	I0916 03:31:52.331330    3118 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:52.331664    3118 main.go:141] libmachine: Using API Version  1
	I0916 03:31:52.331676    3118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:52.331927    3118 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:52.332044    3118 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:52.332130    3118 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:52.332388    3118 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:52.332417    3118 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:52.340802    3118 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50769
	I0916 03:31:52.341109    3118 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:52.341417    3118 main.go:141] libmachine: Using API Version  1
	I0916 03:31:52.341428    3118 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:52.341643    3118 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:52.341755    3118 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:52.341892    3118 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:52.341911    3118 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:52.341979    3118 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:52.342061    3118 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:52.342143    3118 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:52.342228    3118 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:52.376630    3118 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:52.381034    3118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:52.392047    3118 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:52.392066    3118 api_server.go:166] Checking apiserver status ...
	I0916 03:31:52.392115    3118 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:52.401899    3118 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:52.401908    3118 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:52.401917    3118 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (149.743269ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:52.542109    3123 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.45s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.31s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-632000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (148.442325ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:31:52.855581    3133 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.31s)

TestMultiControlPlane/serial/RestartSecondaryNode (53.81s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 node start m02 -v=7 --alsologtostderr: exit status 85 (145.101894ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0916 03:31:52.921733    3138 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:52.922124    3138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.922129    3138 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:52.922133    3138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:52.922310    3138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:52.922631    3138 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:52.922961    3138 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:52.944360    3138 out.go:201] 
	W0916 03:31:52.965658    3138 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0916 03:31:52.965682    3138 out.go:270] * 
	* 
	W0916 03:31:52.968995    3138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:31:52.989828    3138 out.go:201] 

** /stderr **
ha_test.go:422: I0916 03:31:52.921733    3138 out.go:345] Setting OutFile to fd 1 ...
I0916 03:31:52.922124    3138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:31:52.922129    3138 out.go:358] Setting ErrFile to fd 2...
I0916 03:31:52.922133    3138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:31:52.922310    3138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:31:52.922631    3138 mustload.go:65] Loading cluster: ha-632000
I0916 03:31:52.922961    3138 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:31:52.944360    3138 out.go:201] 
W0916 03:31:52.965658    3138 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0916 03:31:52.965682    3138 out.go:270] * 
* 
W0916 03:31:52.968995    3138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 03:31:52.989828    3138 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-amd64 -p ha-632000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (148.773761ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:31:53.066415    3140 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:53.066592    3140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:53.066597    3140 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:53.066601    3140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:53.066800    3140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:53.066973    3140 out.go:352] Setting JSON to false
	I0916 03:31:53.066995    3140 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:53.067039    3140 notify.go:220] Checking for updates...
	I0916 03:31:53.067284    3140 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:53.067298    3140 status.go:255] checking status of ha-632000 ...
	I0916 03:31:53.067742    3140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:53.067784    3140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:53.076551    3140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50795
	I0916 03:31:53.076856    3140 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:53.077229    3140 main.go:141] libmachine: Using API Version  1
	I0916 03:31:53.077240    3140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:53.077471    3140 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:53.077619    3140 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:53.077714    3140 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:53.077776    3140 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:53.078747    3140 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:53.078767    3140 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:53.079027    3140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:53.079062    3140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:53.087261    3140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50797
	I0916 03:31:53.087619    3140 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:53.087963    3140 main.go:141] libmachine: Using API Version  1
	I0916 03:31:53.087975    3140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:53.088223    3140 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:53.088330    3140 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:53.088425    3140 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:53.088688    3140 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:53.088716    3140 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:53.097257    3140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50799
	I0916 03:31:53.097567    3140 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:53.097902    3140 main.go:141] libmachine: Using API Version  1
	I0916 03:31:53.097921    3140 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:53.098115    3140 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:53.098222    3140 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:53.098370    3140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:53.098390    3140 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:53.098458    3140 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:53.098556    3140 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:53.098638    3140 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:53.098719    3140 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:53.134676    3140 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:53.139043    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:53.149832    3140 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:53.149853    3140 api_server.go:166] Checking apiserver status ...
	I0916 03:31:53.149901    3140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:53.159600    3140 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:53.159611    3140 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:53.159622    3140 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (152.448368ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:31:54.321612    3145 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:54.321788    3145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:54.321793    3145 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:54.321797    3145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:54.321972    3145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:54.322142    3145 out.go:352] Setting JSON to false
	I0916 03:31:54.322163    3145 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:54.322201    3145 notify.go:220] Checking for updates...
	I0916 03:31:54.322433    3145 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:54.322450    3145 status.go:255] checking status of ha-632000 ...
	I0916 03:31:54.322897    3145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:54.322937    3145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:54.331955    3145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50803
	I0916 03:31:54.332354    3145 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:54.332773    3145 main.go:141] libmachine: Using API Version  1
	I0916 03:31:54.332784    3145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:54.333076    3145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:54.333194    3145 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:54.333285    3145 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:54.333348    3145 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:54.334331    3145 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:54.334351    3145 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:54.334591    3145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:54.334615    3145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:54.343309    3145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50805
	I0916 03:31:54.343807    3145 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:54.344215    3145 main.go:141] libmachine: Using API Version  1
	I0916 03:31:54.344238    3145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:54.344504    3145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:54.344605    3145 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:54.344697    3145 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:54.344955    3145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:54.344998    3145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:54.353781    3145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50807
	I0916 03:31:54.354096    3145 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:54.354424    3145 main.go:141] libmachine: Using API Version  1
	I0916 03:31:54.354441    3145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:54.354651    3145 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:54.354757    3145 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:54.354891    3145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:54.354910    3145 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:54.354982    3145 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:54.355092    3145 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:54.355171    3145 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:54.355250    3145 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:54.390247    3145 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:54.394628    3145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:54.405202    3145 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:54.405223    3145 api_server.go:166] Checking apiserver status ...
	I0916 03:31:54.405263    3145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:54.415007    3145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:54.415016    3145 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:54.415024    3145 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (157.690319ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:31:55.775535    3150 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:55.775801    3150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:55.775807    3150 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:55.775810    3150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:55.775983    3150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:55.776156    3150 out.go:352] Setting JSON to false
	I0916 03:31:55.776178    3150 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:55.776218    3150 notify.go:220] Checking for updates...
	I0916 03:31:55.776503    3150 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:55.776515    3150 status.go:255] checking status of ha-632000 ...
	I0916 03:31:55.776918    3150 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:55.776966    3150 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:55.785721    3150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50811
	I0916 03:31:55.786067    3150 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:55.786477    3150 main.go:141] libmachine: Using API Version  1
	I0916 03:31:55.786492    3150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:55.786693    3150 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:55.786817    3150 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:55.786913    3150 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:55.787021    3150 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:55.787973    3150 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:55.787994    3150 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:55.788264    3150 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:55.788289    3150 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:55.796450    3150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50813
	I0916 03:31:55.796796    3150 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:55.797147    3150 main.go:141] libmachine: Using API Version  1
	I0916 03:31:55.797170    3150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:55.797399    3150 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:55.797526    3150 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:55.803742    3150 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:55.803997    3150 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:55.804020    3150 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:55.812324    3150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50815
	I0916 03:31:55.812632    3150 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:55.812956    3150 main.go:141] libmachine: Using API Version  1
	I0916 03:31:55.812966    3150 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:55.813169    3150 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:55.813279    3150 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:55.813428    3150 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:55.813450    3150 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:55.813536    3150 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:55.813611    3150 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:55.813696    3150 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:55.813772    3150 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:55.849510    3150 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:55.853954    3150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:55.864558    3150 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:55.864578    3150 api_server.go:166] Checking apiserver status ...
	I0916 03:31:55.864615    3150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:55.874243    3150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:55.874253    3150 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:55.874281    3150 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (157.893625ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:31:59.240968    3155 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:31:59.241211    3155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:59.241216    3155 out.go:358] Setting ErrFile to fd 2...
	I0916 03:31:59.241233    3155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:31:59.241405    3155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:31:59.241582    3155 out.go:352] Setting JSON to false
	I0916 03:31:59.241603    3155 mustload.go:65] Loading cluster: ha-632000
	I0916 03:31:59.241645    3155 notify.go:220] Checking for updates...
	I0916 03:31:59.241923    3155 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:31:59.241937    3155 status.go:255] checking status of ha-632000 ...
	I0916 03:31:59.242361    3155 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:59.242409    3155 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:59.251357    3155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50819
	I0916 03:31:59.251867    3155 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:59.252298    3155 main.go:141] libmachine: Using API Version  1
	I0916 03:31:59.252324    3155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:59.252672    3155 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:59.252855    3155 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:31:59.252938    3155 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:31:59.253001    3155 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:31:59.254002    3155 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:31:59.254021    3155 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:59.254271    3155 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:59.254290    3155 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:59.268425    3155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50821
	I0916 03:31:59.268814    3155 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:59.269200    3155 main.go:141] libmachine: Using API Version  1
	I0916 03:31:59.269233    3155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:59.269437    3155 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:59.269557    3155 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:31:59.269643    3155 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:31:59.269914    3155 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:31:59.269940    3155 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:31:59.278243    3155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50823
	I0916 03:31:59.278536    3155 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:31:59.278840    3155 main.go:141] libmachine: Using API Version  1
	I0916 03:31:59.278853    3155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:31:59.279042    3155 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:31:59.279146    3155 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:31:59.279284    3155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:31:59.279303    3155 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:31:59.279379    3155 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:31:59.279456    3155 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:31:59.279538    3155 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:31:59.279623    3155 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:31:59.316046    3155 ssh_runner.go:195] Run: systemctl --version
	I0916 03:31:59.320409    3155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:31:59.331015    3155 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:31:59.331038    3155 api_server.go:166] Checking apiserver status ...
	I0916 03:31:59.331084    3155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:31:59.340828    3155 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:31:59.340838    3155 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:31:59.340848    3155 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (162.829591ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:32:03.109693    3163 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:03.109870    3163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:03.109875    3163 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:03.109879    3163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:03.110057    3163 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:03.110236    3163 out.go:352] Setting JSON to false
	I0916 03:32:03.110263    3163 mustload.go:65] Loading cluster: ha-632000
	I0916 03:32:03.110302    3163 notify.go:220] Checking for updates...
	I0916 03:32:03.110587    3163 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:03.110601    3163 status.go:255] checking status of ha-632000 ...
	I0916 03:32:03.111020    3163 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:03.111072    3163 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:03.119917    3163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50827
	I0916 03:32:03.120411    3163 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:03.120980    3163 main.go:141] libmachine: Using API Version  1
	I0916 03:32:03.120991    3163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:03.121302    3163 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:03.121475    3163 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:03.121563    3163 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:03.121628    3163 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:03.122580    3163 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:32:03.122630    3163 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:03.122899    3163 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:03.122951    3163 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:03.131304    3163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50829
	I0916 03:32:03.131740    3163 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:03.139449    3163 main.go:141] libmachine: Using API Version  1
	I0916 03:32:03.139468    3163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:03.139745    3163 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:03.139968    3163 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:03.140140    3163 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:03.140431    3163 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:03.140466    3163 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:03.149478    3163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50831
	I0916 03:32:03.149866    3163 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:03.150251    3163 main.go:141] libmachine: Using API Version  1
	I0916 03:32:03.150265    3163 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:03.150505    3163 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:03.150623    3163 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:03.150757    3163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:32:03.150774    3163 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:03.150847    3163 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:03.150930    3163 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:03.151012    3163 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:03.151089    3163 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:32:03.187514    3163 ssh_runner.go:195] Run: systemctl --version
	I0916 03:32:03.191859    3163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:32:03.202585    3163 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:03.202605    3163 api_server.go:166] Checking apiserver status ...
	I0916 03:32:03.202647    3163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:32:03.212286    3163 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:32:03.212295    3163 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:32:03.212308    3163 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (160.095416ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:32:06.428399    3168 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:06.428584    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:06.428589    3168 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:06.428593    3168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:06.428759    3168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:06.428932    3168 out.go:352] Setting JSON to false
	I0916 03:32:06.428955    3168 mustload.go:65] Loading cluster: ha-632000
	I0916 03:32:06.428992    3168 notify.go:220] Checking for updates...
	I0916 03:32:06.429225    3168 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:06.429242    3168 status.go:255] checking status of ha-632000 ...
	I0916 03:32:06.429660    3168 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:06.429702    3168 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:06.438364    3168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50835
	I0916 03:32:06.438778    3168 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:06.439295    3168 main.go:141] libmachine: Using API Version  1
	I0916 03:32:06.439322    3168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:06.439623    3168 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:06.439797    3168 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:06.439896    3168 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:06.439962    3168 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:06.441014    3168 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:32:06.441035    3168 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:06.441295    3168 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:06.441341    3168 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:06.449831    3168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50837
	I0916 03:32:06.450290    3168 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:06.450739    3168 main.go:141] libmachine: Using API Version  1
	I0916 03:32:06.450746    3168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:06.451046    3168 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:06.457753    3168 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:06.457888    3168 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:06.458140    3168 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:06.458166    3168 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:06.466395    3168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50839
	I0916 03:32:06.466801    3168 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:06.467135    3168 main.go:141] libmachine: Using API Version  1
	I0916 03:32:06.467156    3168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:06.467381    3168 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:06.467500    3168 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:06.467670    3168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:32:06.467690    3168 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:06.467769    3168 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:06.467859    3168 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:06.467939    3168 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:06.468042    3168 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:32:06.504037    3168 ssh_runner.go:195] Run: systemctl --version
	I0916 03:32:06.508430    3168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:32:06.519181    3168 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:06.519204    3168 api_server.go:166] Checking apiserver status ...
	I0916 03:32:06.519270    3168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:32:06.529102    3168 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:32:06.529111    3168 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:32:06.529129    3168 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (154.579916ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:32:16.207832    3173 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:16.208491    3173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:16.208500    3173 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:16.208506    3173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:16.208983    3173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:16.209258    3173 out.go:352] Setting JSON to false
	I0916 03:32:16.209283    3173 mustload.go:65] Loading cluster: ha-632000
	I0916 03:32:16.209311    3173 notify.go:220] Checking for updates...
	I0916 03:32:16.209610    3173 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:16.209625    3173 status.go:255] checking status of ha-632000 ...
	I0916 03:32:16.210132    3173 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:16.210169    3173 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:16.219226    3173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50843
	I0916 03:32:16.219654    3173 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:16.220068    3173 main.go:141] libmachine: Using API Version  1
	I0916 03:32:16.220077    3173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:16.220359    3173 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:16.220486    3173 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:16.220572    3173 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:16.220640    3173 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:16.221570    3173 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:32:16.221589    3173 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:16.221877    3173 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:16.221923    3173 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:16.230520    3173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50845
	I0916 03:32:16.230954    3173 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:16.231423    3173 main.go:141] libmachine: Using API Version  1
	I0916 03:32:16.231445    3173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:16.231695    3173 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:16.231856    3173 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:16.231982    3173 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:16.232260    3173 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:16.232286    3173 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:16.240736    3173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50847
	I0916 03:32:16.241051    3173 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:16.241380    3173 main.go:141] libmachine: Using API Version  1
	I0916 03:32:16.241393    3173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:16.241596    3173 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:16.241691    3173 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:16.241829    3173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:32:16.241849    3173 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:16.241930    3173 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:16.242002    3173 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:16.242076    3173 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:16.242155    3173 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:32:16.278430    3173 ssh_runner.go:195] Run: systemctl --version
	I0916 03:32:16.282654    3173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:32:16.293651    3173 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:16.293670    3173 api_server.go:166] Checking apiserver status ...
	I0916 03:32:16.293720    3173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:32:16.303350    3173 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:32:16.303360    3173 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:32:16.303371    3173 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (150.693363ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:32:23.172228    3181 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:23.172420    3181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:23.172425    3181 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:23.172429    3181 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:23.172594    3181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:23.172776    3181 out.go:352] Setting JSON to false
	I0916 03:32:23.172799    3181 mustload.go:65] Loading cluster: ha-632000
	I0916 03:32:23.172836    3181 notify.go:220] Checking for updates...
	I0916 03:32:23.173097    3181 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:23.173111    3181 status.go:255] checking status of ha-632000 ...
	I0916 03:32:23.173523    3181 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:23.173578    3181 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:23.182328    3181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50851
	I0916 03:32:23.182656    3181 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:23.183048    3181 main.go:141] libmachine: Using API Version  1
	I0916 03:32:23.183056    3181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:23.183254    3181 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:23.183366    3181 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:23.183465    3181 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:23.183525    3181 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:23.184451    3181 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:32:23.184471    3181 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:23.184709    3181 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:23.184728    3181 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:23.193029    3181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50853
	I0916 03:32:23.193342    3181 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:23.193708    3181 main.go:141] libmachine: Using API Version  1
	I0916 03:32:23.193729    3181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:23.193997    3181 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:23.194115    3181 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:23.194205    3181 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:23.194469    3181 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:23.194490    3181 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:23.202859    3181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50855
	I0916 03:32:23.203169    3181 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:23.203512    3181 main.go:141] libmachine: Using API Version  1
	I0916 03:32:23.203530    3181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:23.203724    3181 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:23.203817    3181 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:23.203951    3181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:32:23.203973    3181 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:23.204041    3181 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:23.204139    3181 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:23.204226    3181 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:23.204299    3181 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:32:23.240053    3181 ssh_runner.go:195] Run: systemctl --version
	I0916 03:32:23.244304    3181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:32:23.254833    3181 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:23.254853    3181 api_server.go:166] Checking apiserver status ...
	I0916 03:32:23.254896    3181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:32:23.264762    3181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:32:23.264771    3181 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:32:23.264781    3181 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 6 (155.459355ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I0916 03:32:46.430045    3189 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:46.430225    3189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:46.430231    3189 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:46.430234    3189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:46.430427    3189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:46.430605    3189 out.go:352] Setting JSON to false
	I0916 03:32:46.430633    3189 mustload.go:65] Loading cluster: ha-632000
	I0916 03:32:46.430672    3189 notify.go:220] Checking for updates...
	I0916 03:32:46.430966    3189 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:46.430978    3189 status.go:255] checking status of ha-632000 ...
	I0916 03:32:46.431385    3189 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:46.431435    3189 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:46.440533    3189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50859
	I0916 03:32:46.440993    3189 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:46.441626    3189 main.go:141] libmachine: Using API Version  1
	I0916 03:32:46.441635    3189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:46.441953    3189 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:46.442101    3189 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:46.442180    3189 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:46.442287    3189 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:46.443186    3189 status.go:330] ha-632000 host status = "Running" (err=<nil>)
	I0916 03:32:46.443207    3189 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:46.443458    3189 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:46.443484    3189 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:46.451834    3189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50861
	I0916 03:32:46.452430    3189 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:46.452901    3189 main.go:141] libmachine: Using API Version  1
	I0916 03:32:46.452932    3189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:46.453268    3189 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:46.453513    3189 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:46.453642    3189 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:32:46.453918    3189 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:46.453948    3189 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:46.462146    3189 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50863
	I0916 03:32:46.462456    3189 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:46.462792    3189 main.go:141] libmachine: Using API Version  1
	I0916 03:32:46.462807    3189 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:46.463001    3189 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:46.463098    3189 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:46.463240    3189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:32:46.463259    3189 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:46.463340    3189 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:46.463421    3189 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:46.463503    3189 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:46.463588    3189 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:32:46.499813    3189 ssh_runner.go:195] Run: systemctl --version
	I0916 03:32:46.504091    3189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0916 03:32:46.515966    3189 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:46.515986    3189 api_server.go:166] Checking apiserver status ...
	I0916 03:32:46.516032    3189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0916 03:32:46.526566    3189 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:32:46.526577    3189 status.go:422] ha-632000 apiserver status = Stopped (err=<nil>)
	I0916 03:32:46.526584    3189 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (148.865423ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:32:46.665406    3194 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
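The repeated status.go:417 error above means the profile's cluster entry is missing from the kubeconfig file, which is why status reports "Misconfigured". A minimal Python sketch of that lookup (kubeconfig contents here are illustrative sample data, not taken from the log):

```python
# Minimal model of the check behind the status.go:417 error: the profile
# name must appear among the kubeconfig's cluster entries, otherwise the
# endpoint cannot be resolved and status is reported as Misconfigured.
kubeconfig = {
    "clusters": [
        {"name": "minikube", "cluster": {"server": "https://192.169.0.5:8443"}},
    ],
    "current-context": "minikube",
}

def endpoint_for(cfg, profile):
    """Return the API server URL for a profile, or None if absent."""
    for c in cfg["clusters"]:
        if c["name"] == profile:
            return c["cluster"]["server"]
    return None  # profile "does not appear in" the kubeconfig

print(endpoint_for(kubeconfig, "ha-632000"))  # no "ha-632000" entry -> None
```

Running `minikube update-context`, as the warning suggests, rewrites the kubeconfig so this lookup succeeds again.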
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-632000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\"
,\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000}
,\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-632000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\"
:60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
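The two assertions above parse `profile list --output json` and expect 4 nodes and "HAppy" status, but the profile has 1 node and "Stopped". A simplified Python sketch of that comparison, using a trimmed-down sample with the field names shown in the log (helper name `check_profile` is illustrative):

```python
import json

# Trimmed-down sample mirroring the `profile list --output json` shape
# from the failure above (only the fields the check reads).
profile_list = json.loads("""
{"invalid": [], "valid": [{
    "Name": "ha-632000",
    "Status": "Stopped",
    "Config": {"Nodes": [
        {"Name": "", "IP": "192.169.0.5", "ControlPlane": true, "Worker": true}
    ]}
}]}
""")

def check_profile(profiles, name, want_nodes, want_status):
    """Return a list of mismatches, analogous to ha_test.go's assertions."""
    problems = []
    for p in profiles["valid"]:
        if p["Name"] != name:
            continue
        nodes = len(p["Config"]["Nodes"])
        if nodes != want_nodes:
            problems.append(f"expected {want_nodes} nodes but have {nodes}")
        if p["Status"] != want_status:
            problems.append(f'expected "{want_status}" status but have "{p["Status"]}"')
    return problems

print(check_profile(profile_list, "ha-632000", 4, "HAppy"))
```

With the sample above this reports both mismatches, matching the two test failures.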
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (149.614393ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:32:46.979251    3204 status.go:417] kubeconfig endpoint: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.31s)

TestMultiControlPlane/serial/DeleteSecondaryNode (3.16s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 node delete m03 -v=7 --alsologtostderr: exit status 80 (250.373958ms)

-- stdout --
	* Deleting node m03 from cluster ha-632000
	
	

-- /stdout --
** stderr ** 
	I0916 03:33:56.396790    3246 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:33:56.397088    3246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:33:56.397094    3246 out.go:358] Setting ErrFile to fd 2...
	I0916 03:33:56.397098    3246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:33:56.397291    3246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:33:56.397655    3246 mustload.go:65] Loading cluster: ha-632000
	I0916 03:33:56.397972    3246 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:33:56.398379    3246 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:56.398413    3246 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:56.406834    3246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50930
	I0916 03:33:56.407253    3246 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:56.407640    3246 main.go:141] libmachine: Using API Version  1
	I0916 03:33:56.407666    3246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:56.407880    3246 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:56.407997    3246 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:56.408095    3246 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:56.408163    3246 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:56.409126    3246 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:33:56.409372    3246 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:56.409393    3246 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:56.417713    3246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50932
	I0916 03:33:56.418062    3246 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:56.418424    3246 main.go:141] libmachine: Using API Version  1
	I0916 03:33:56.418444    3246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:56.418667    3246 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:56.418782    3246 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:56.418879    3246 api_server.go:166] Checking apiserver status ...
	I0916 03:33:56.418938    3246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:33:56.418957    3246 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:56.419035    3246 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:56.419136    3246 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:56.419214    3246 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:56.419298    3246 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:56.464113    3246 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1984/cgroup
	W0916 03:33:56.472104    3246 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1984/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:33:56.472164    3246 ssh_runner.go:195] Run: ls
	I0916 03:33:56.475188    3246 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:33:56.478378    3246 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:33:56.499939    3246 out.go:177] * Deleting node m03 from cluster ha-632000
	I0916 03:33:56.523196    3246 out.go:201] 
	W0916 03:33:56.544771    3246 out.go:270] X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve node: Could not find node m03
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve node: Could not find node m03
	W0916 03:33:56.544796    3246 out.go:270] * 
	* 
	W0916 03:33:56.548111    3246 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 03:33:56.569814    3246 out.go:201] 

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-632000 node delete m03 -v=7 --alsologtostderr": exit status 80
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:498: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:501: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:504: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:507: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:524: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
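The go-template above emits one line per node with the status of its Ready condition; the test expects three `True` lines but the cluster produced only one. A minimal Python sketch of the same extraction, assuming the standard `kubectl get nodes -o json` shape (the node data below is illustrative, not from the log):

```python
# The test counts nodes whose Ready condition reports "True"; this is a
# simplified version of that check over kubectl's JSON node list.
nodes = {
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
    ]
}

def ready_statuses(node_list):
    """Mirror the go-template: one Ready-condition status per node."""
    out = []
    for item in node_list["items"]:
        for cond in item["status"]["conditions"]:
            if cond["type"] == "Ready":
                out.append(cond["status"])
    return out

statuses = ready_statuses(nodes)
print(statuses)                     # one entry per node in the cluster
print(statuses.count("True"))       # the test wants this to equal 3
```

With only one node present, the count is 1, which is why the assertion at ha_test.go:524 fails.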
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-632000 logs -n 25: (2.200049562s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-632000 -- apply -f             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- rollout status       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | deployment/busybox                   |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  -- nslookup    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-632000 -v=7                | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node stop m02 -v=7         | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node start m02 -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000 -v=7               | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-632000 -v=7                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:33 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	| node    | ha-632000 node delete m03 -v=7       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:32:55
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:32:55.479806    3215 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:55.479979    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:55.479984    3215 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:55.479988    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:55.480154    3215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:55.481720    3215 out.go:352] Setting JSON to false
	I0916 03:32:55.503766    3215 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1946,"bootTime":1726480829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:32:55.503923    3215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:32:55.525873    3215 out.go:177] * [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:32:55.569056    3215 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:32:55.569122    3215 notify.go:220] Checking for updates...
	I0916 03:32:55.611676    3215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:55.632614    3215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:32:55.653732    3215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:32:55.674798    3215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:32:55.695893    3215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:32:55.717487    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:55.717654    3215 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:32:55.718427    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.718486    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:55.727827    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50892
	I0916 03:32:55.728193    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:55.728608    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:32:55.728620    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:55.728861    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:55.728985    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:55.757983    3215 out.go:177] * Using the hyperkit driver based on existing profile
	I0916 03:32:55.799698    3215 start.go:297] selected driver: hyperkit
	I0916 03:32:55.799725    3215 start.go:901] validating driver "hyperkit" against &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:32:55.799929    3215 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:32:55.800120    3215 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:32:55.800326    3215 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:32:55.810037    3215 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:32:55.813846    3215 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.813866    3215 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:32:55.816398    3215 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:32:55.816433    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:32:55.816474    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:32:55.816535    3215 start.go:340] cluster config:
	{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:32:55.816615    3215 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:32:55.858742    3215 out.go:177] * Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	I0916 03:32:55.879767    3215 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:32:55.879887    3215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:32:55.879907    3215 cache.go:56] Caching tarball of preloaded images
	I0916 03:32:55.880125    3215 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:32:55.880144    3215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:32:55.880309    3215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:32:55.881254    3215 start.go:360] acquireMachinesLock for ha-632000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:32:55.881410    3215 start.go:364] duration metric: took 129.858µs to acquireMachinesLock for "ha-632000"
	I0916 03:32:55.881446    3215 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:32:55.881464    3215 fix.go:54] fixHost starting: 
	I0916 03:32:55.881899    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.881941    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:55.891224    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50894
	I0916 03:32:55.891581    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:55.891955    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:32:55.891969    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:55.892169    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:55.892298    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:55.892402    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:55.892496    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:55.892579    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:55.893499    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 2677 missing from process table
	I0916 03:32:55.893531    3215 fix.go:112] recreateIfNeeded on ha-632000: state=Stopped err=<nil>
	I0916 03:32:55.893549    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	W0916 03:32:55.893640    3215 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:32:55.914812    3215 out.go:177] * Restarting existing hyperkit VM for "ha-632000" ...
	I0916 03:32:55.956786    3215 main.go:141] libmachine: (ha-632000) Calling .Start
	I0916 03:32:55.957059    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:55.957099    3215 main.go:141] libmachine: (ha-632000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid
	I0916 03:32:55.958906    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 2677 missing from process table
	I0916 03:32:55.958920    3215 main.go:141] libmachine: (ha-632000) DBG | pid 2677 is in state "Stopped"
	I0916 03:32:55.958942    3215 main.go:141] libmachine: (ha-632000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid...
	I0916 03:32:55.959251    3215 main.go:141] libmachine: (ha-632000) DBG | Using UUID 1510d94a-ca8f-47b3-af67-8767f9a495bd
	I0916 03:32:56.070187    3215 main.go:141] libmachine: (ha-632000) DBG | Generated MAC 1a:88:5b:ab:d5:de
	I0916 03:32:56.070219    3215 main.go:141] libmachine: (ha-632000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:32:56.070325    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:32:56.070364    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:32:56.070401    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1510d94a-ca8f-47b3-af67-8767f9a495bd", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:32:56.070433    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1510d94a-ca8f-47b3-af67-8767f9a495bd -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:32:56.070442    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:32:56.071925    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Pid is 3227
	I0916 03:32:56.072259    3215 main.go:141] libmachine: (ha-632000) DBG | Attempt 0
	I0916 03:32:56.072278    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:56.072356    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:32:56.074045    3215 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:32:56.074108    3215 main.go:141] libmachine: (ha-632000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:32:56.074131    3215 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95a02}
	I0916 03:32:56.074143    3215 main.go:141] libmachine: (ha-632000) DBG | Found match: 1a:88:5b:ab:d5:de
	I0916 03:32:56.074178    3215 main.go:141] libmachine: (ha-632000) DBG | IP: 192.169.0.5
	I0916 03:32:56.074212    3215 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:32:56.074819    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:56.075006    3215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:32:56.075469    3215 machine.go:93] provisionDockerMachine start ...
	I0916 03:32:56.075480    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:56.075609    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:56.075727    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:56.075849    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:56.075988    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:56.076094    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:56.076241    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:32:56.076440    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:32:56.076449    3215 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:32:56.079763    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:32:56.132093    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:32:56.132743    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:32:56.132759    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:32:56.132779    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:32:56.132793    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:32:56.511645    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:32:56.511711    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:32:56.626770    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:32:56.626798    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:32:56.626807    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:32:56.626814    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:32:56.627640    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:32:56.627650    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:33:02.223309    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:33:02.223373    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:33:02.223381    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:33:02.247110    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:33:07.150140    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 03:33:07.150156    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.150328    3215 buildroot.go:166] provisioning hostname "ha-632000"
	I0916 03:33:07.150340    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.150432    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.150541    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.150645    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.150761    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.150866    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.151049    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.151232    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.151240    3215 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000 && echo "ha-632000" | sudo tee /etc/hostname
	I0916 03:33:07.226819    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000
	
	I0916 03:33:07.226837    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.226964    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.227086    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.227191    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.227294    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.227437    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.227577    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.227587    3215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:33:07.299987    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:33:07.300009    3215 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:33:07.300033    3215 buildroot.go:174] setting up certificates
	I0916 03:33:07.300040    3215 provision.go:84] configureAuth start
	I0916 03:33:07.300048    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.300187    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:07.300306    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.300398    3215 provision.go:143] copyHostCerts
	I0916 03:33:07.300432    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:33:07.300507    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:33:07.300515    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:33:07.300651    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:33:07.300860    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:33:07.300900    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:33:07.300905    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:33:07.300984    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:33:07.301151    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:33:07.301190    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:33:07.301195    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:33:07.301303    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:33:07.301475    3215 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000 san=[127.0.0.1 192.169.0.5 ha-632000 localhost minikube]
	I0916 03:33:07.406966    3215 provision.go:177] copyRemoteCerts
	I0916 03:33:07.407026    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:33:07.407042    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.407190    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.407288    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.407393    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.407475    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:07.447794    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:33:07.447868    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:33:07.466620    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:33:07.466680    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 03:33:07.485470    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:33:07.485532    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:33:07.504299    3215 provision.go:87] duration metric: took 204.246783ms to configureAuth
	I0916 03:33:07.504314    3215 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:33:07.504463    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:33:07.504476    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:07.504604    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.504697    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.504772    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.504849    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.504936    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.505061    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.505189    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.505197    3215 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:33:07.571066    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:33:07.571081    3215 buildroot.go:70] root file system type: tmpfs
	I0916 03:33:07.571159    3215 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:33:07.571173    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.571322    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.571419    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.571513    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.571606    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.571751    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.571889    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.571931    3215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:33:07.649104    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:33:07.649126    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.649269    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.649371    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.649450    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.649533    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.649665    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.649818    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.649830    3215 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:33:09.222903    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:33:09.222918    3215 machine.go:96] duration metric: took 13.147651717s to provisionDockerMachine
	I0916 03:33:09.222930    3215 start.go:293] postStartSetup for "ha-632000" (driver="hyperkit")
	I0916 03:33:09.222938    3215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:33:09.222948    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.223165    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:33:09.223181    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.223274    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.223366    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.223452    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.223531    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.263235    3215 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:33:09.267664    3215 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:33:09.267680    3215 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:33:09.267791    3215 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:33:09.267981    3215 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:33:09.267987    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:33:09.268197    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:33:09.275726    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:33:09.310027    3215 start.go:296] duration metric: took 87.088891ms for postStartSetup
	I0916 03:33:09.310053    3215 fix.go:56] duration metric: took 13.428812423s for fixHost
	I0916 03:33:09.310081    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.310216    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.310300    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.310400    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.310486    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.310608    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:09.310743    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:09.310754    3215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:33:09.375918    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482789.535889203
	
	I0916 03:33:09.375929    3215 fix.go:216] guest clock: 1726482789.535889203
	I0916 03:33:09.375934    3215 fix.go:229] Guest: 2024-09-16 03:33:09.535889203 -0700 PDT Remote: 2024-09-16 03:33:09.310063 -0700 PDT m=+13.866322037 (delta=225.826203ms)
	I0916 03:33:09.375954    3215 fix.go:200] guest clock delta is within tolerance: 225.826203ms
	I0916 03:33:09.375957    3215 start.go:83] releasing machines lock for "ha-632000", held for 13.494753864s
	I0916 03:33:09.375973    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376091    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:09.376174    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376468    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376565    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376645    3215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:33:09.376675    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.376728    3215 ssh_runner.go:195] Run: cat /version.json
	I0916 03:33:09.376737    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.376782    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.376831    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.376914    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.376927    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.377022    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.377041    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.377106    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.377126    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.456514    3215 ssh_runner.go:195] Run: systemctl --version
	I0916 03:33:09.461506    3215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:33:09.465762    3215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:33:09.465810    3215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:33:09.477939    3215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:33:09.477953    3215 start.go:495] detecting cgroup driver to use...
	I0916 03:33:09.478049    3215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:33:09.492694    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:33:09.501292    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:33:09.509688    3215 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:33:09.509733    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:33:09.518278    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:33:09.526726    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:33:09.534992    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:33:09.543323    3215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:33:09.551873    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:33:09.560327    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:33:09.568584    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 03:33:09.576791    3215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:33:09.584186    3215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:33:09.591628    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:09.683919    3215 ssh_runner.go:195] Run: sudo systemctl restart containerd
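The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and the `runc.v2` runtime before containerd is restarted. A hedged sketch of the central edit, applied to a throwaway copy of the config rather than the real file (GNU `sed -i` assumed):

```shell
# Sketch of the SystemdCgroup rewrite from the log, on a sample config.toml.
cfg=/tmp/demo-config.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
# Same substitution minikube runs: flip SystemdCgroup while keeping indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The `\1` backreference preserves the original leading whitespace, so the edit is safe regardless of how deeply the key is nested in the TOML tree.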
	I0916 03:33:09.701171    3215 start.go:495] detecting cgroup driver to use...
	I0916 03:33:09.701254    3215 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:33:09.719887    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:33:09.733645    3215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:33:09.752372    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:33:09.763867    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:33:09.774813    3215 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:33:09.796608    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:33:09.806720    3215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:33:09.821625    3215 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:33:09.824603    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:33:09.831721    3215 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:33:09.845160    3215 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:33:09.936544    3215 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:33:10.047205    3215 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:33:10.047277    3215 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:33:10.062093    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:10.159868    3215 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:33:12.438637    3215 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.278785972s)
	I0916 03:33:12.438709    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:33:12.450131    3215 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:33:12.464365    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:33:12.475529    3215 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:33:12.581964    3215 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:33:12.692206    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:12.795426    3215 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:33:12.809639    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:33:12.820588    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:12.917759    3215 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:33:12.971962    3215 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:33:12.972066    3215 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:33:12.976365    3215 start.go:563] Will wait 60s for crictl version
	I0916 03:33:12.976415    3215 ssh_runner.go:195] Run: which crictl
	I0916 03:33:12.979580    3215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:33:13.004331    3215 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:33:13.004419    3215 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:33:13.021239    3215 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:33:13.062090    3215 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:33:13.062165    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:13.062503    3215 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0916 03:33:13.066248    3215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
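The `/etc/hosts` command above is idempotent: it strips any existing `host.minikube.internal` line, appends the current one, and copies the result back. A sketch of the same pattern against a temporary hosts file (the stale `192.169.0.9` entry is a made-up example):

```shell
# Idempotent hosts-entry update, as in the log: remove stale line, append fresh one.
hosts=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.$$"
cp "$hosts.$$" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Writing to a temp file and copying back (rather than redirecting into the file being read) avoids truncating `/etc/hosts` mid-read; `cp` instead of `mv` also preserves the original file's ownership and SELinux context.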
	I0916 03:33:13.076647    3215 kubeadm.go:883] updating cluster {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:33:13.076709    3215 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:33:13.076786    3215 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:33:13.087609    3215 docker.go:685] Got preloaded images: 
	I0916 03:33:13.087621    3215 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 03:33:13.087677    3215 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:33:13.095744    3215 ssh_runner.go:195] Run: which lz4
	I0916 03:33:13.098494    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 03:33:13.098618    3215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 03:33:13.101595    3215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 03:33:13.101612    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0916 03:33:14.084317    3215 docker.go:649] duration metric: took 985.769483ms to copy over tarball
	I0916 03:33:14.084385    3215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 03:33:16.270832    3215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186464882s)
	I0916 03:33:16.270853    3215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 03:33:16.296645    3215 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:33:16.305085    3215 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 03:33:16.318614    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:16.421166    3215 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:33:18.737622    3215 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.316472307s)
	I0916 03:33:18.737735    3215 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:33:18.752879    3215 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:33:18.752896    3215 cache_images.go:84] Images are preloaded, skipping loading
	I0916 03:33:18.752905    3215 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0916 03:33:18.752990    3215 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:33:18.753073    3215 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:33:18.789519    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:33:18.789532    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:33:18.789538    3215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:33:18.789553    3215 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-632000 NodeName:ha-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:33:18.789647    3215 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-632000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:33:18.789726    3215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:33:18.798052    3215 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:33:18.798104    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:33:18.806056    3215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0916 03:33:18.819451    3215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:33:18.832642    3215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0916 03:33:18.846172    3215 ssh_runner.go:195] Run: grep 192.169.0.5	control-plane.minikube.internal$ /etc/hosts
	I0916 03:33:18.849147    3215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:33:18.859202    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:18.951797    3215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:33:18.967824    3215 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000 for IP: 192.169.0.5
	I0916 03:33:18.967844    3215 certs.go:194] generating shared ca certs ...
	I0916 03:33:18.967855    3215 certs.go:226] acquiring lock for ca certs: {Name:mkbe6743287669047aae1be1aaf47d24793fafa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:18.968046    3215 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key
	I0916 03:33:18.968119    3215 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key
	I0916 03:33:18.968130    3215 certs.go:256] generating profile certs ...
	I0916 03:33:18.968187    3215 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key
	I0916 03:33:18.968202    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt with IP's: []
	I0916 03:33:19.115081    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt ...
	I0916 03:33:19.115096    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt: {Name:mk46c1c0ef51002f68e28a5ff2adf0ca45fc8f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.115393    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key ...
	I0916 03:33:19.115401    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key: {Name:mk2fd0817be29d26677a71912f5f31a0061cebca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.115607    3215 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250
	I0916 03:33:19.115623    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5]
	I0916 03:33:19.247064    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 ...
	I0916 03:33:19.247077    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250: {Name:mkb3fc96934d50a5857a7bb2474da42e6bb4ee05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.247354    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250 ...
	I0916 03:33:19.247363    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250: {Name:mk201d7e07e1cd31e308f5f1b72647c4ee60af0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.247562    3215 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt
	I0916 03:33:19.247731    3215 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key
	I0916 03:33:19.247889    3215 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key
	I0916 03:33:19.247905    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt with IP's: []
	I0916 03:33:19.281009    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt ...
	I0916 03:33:19.281017    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt: {Name:mk60aa0a5a5eb7665c3030f065b1020c5c196352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.281271    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key ...
	I0916 03:33:19.281278    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key: {Name:mkbdfac964940ca71119764662d5d8f688d3a383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.281517    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 03:33:19.281543    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 03:33:19.281562    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 03:33:19.281599    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 03:33:19.281632    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 03:33:19.281685    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 03:33:19.281730    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 03:33:19.281781    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 03:33:19.281919    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem (1338 bytes)
	W0916 03:33:19.281965    3215 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649_empty.pem, impossibly tiny 0 bytes
	I0916 03:33:19.281972    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 03:33:19.282028    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem (1082 bytes)
	I0916 03:33:19.282097    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:33:19.282141    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem (1675 bytes)
	I0916 03:33:19.282236    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:33:19.282270    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.282290    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem -> /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.282334    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.282838    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:33:19.304358    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:33:19.323828    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:33:19.345178    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:33:19.364867    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0916 03:33:19.384268    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:33:19.404216    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:33:19.423944    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 03:33:19.443472    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:33:19.463114    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem --> /usr/share/ca-certificates/1649.pem (1338 bytes)
	I0916 03:33:19.482459    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /usr/share/ca-certificates/16492.pem (1708 bytes)
	I0916 03:33:19.502483    3215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:33:19.515926    3215 ssh_runner.go:195] Run: openssl version
	I0916 03:33:19.520096    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1649.pem && ln -fs /usr/share/ca-certificates/1649.pem /etc/ssl/certs/1649.pem"
	I0916 03:33:19.528244    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.531642    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:24 /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.531679    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.535928    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1649.pem /etc/ssl/certs/51391683.0"
	I0916 03:33:19.544083    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16492.pem && ln -fs /usr/share/ca-certificates/16492.pem /etc/ssl/certs/16492.pem"
	I0916 03:33:19.552339    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.555606    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:24 /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.555645    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.559871    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16492.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 03:33:19.568117    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:33:19.576404    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.579723    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.579766    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.583979    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 03:33:19.592225    3215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:33:19.595276    3215 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 03:33:19.595317    3215 kubeadm.go:392] StartCluster: {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:33:19.595419    3215 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:33:19.607275    3215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:33:19.617743    3215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:33:19.627110    3215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:33:19.638606    3215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 03:33:19.638617    3215 kubeadm.go:157] found existing configuration files:
	
	I0916 03:33:19.638678    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 03:33:19.646937    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 03:33:19.646992    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 03:33:19.657157    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 03:33:19.665311    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 03:33:19.665364    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 03:33:19.673591    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 03:33:19.681424    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 03:33:19.681466    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:33:19.689446    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 03:33:19.697315    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 03:33:19.697361    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 03:33:19.705435    3215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 03:33:19.767025    3215 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 03:33:19.767126    3215 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 03:33:19.839383    3215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 03:33:19.839477    3215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 03:33:19.839569    3215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 03:33:19.851158    3215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 03:33:19.907269    3215 out.go:235]   - Generating certificates and keys ...
	I0916 03:33:19.907322    3215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 03:33:19.907385    3215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 03:33:19.958129    3215 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 03:33:20.052533    3215 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 03:33:20.318871    3215 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 03:33:20.475785    3215 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 03:33:20.681854    3215 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 03:33:20.681946    3215 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-632000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0916 03:33:20.851582    3215 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 03:33:20.851680    3215 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-632000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0916 03:33:20.958930    3215 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 03:33:21.180837    3215 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 03:33:21.364391    3215 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 03:33:21.364452    3215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 03:33:21.537997    3215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 03:33:21.700525    3215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 03:33:21.962237    3215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 03:33:22.078095    3215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 03:33:22.379417    3215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 03:33:22.379815    3215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 03:33:22.381566    3215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 03:33:22.402746    3215 out.go:235]   - Booting up control plane ...
	I0916 03:33:22.402821    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 03:33:22.402884    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 03:33:22.402931    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 03:33:22.403013    3215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 03:33:22.403090    3215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 03:33:22.403130    3215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 03:33:22.507967    3215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 03:33:22.508073    3215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 03:33:23.010071    3215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.491973ms
	I0916 03:33:23.010162    3215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 03:33:27.009710    3215 kubeadm.go:310] [api-check] The API server is healthy after 4.002390619s
	I0916 03:33:27.020463    3215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 03:33:27.029414    3215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 03:33:27.046102    3215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 03:33:27.046245    3215 kubeadm.go:310] [mark-control-plane] Marking the node ha-632000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 03:33:27.052141    3215 kubeadm.go:310] [bootstrap-token] Using token: y8yedo.jt70eht9od170wkp
	I0916 03:33:27.089069    3215 out.go:235]   - Configuring RBAC rules ...
	I0916 03:33:27.089150    3215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 03:33:27.090796    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 03:33:27.131698    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 03:33:27.133472    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 03:33:27.135748    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 03:33:27.137503    3215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 03:33:27.431991    3215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 03:33:27.829543    3215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 03:33:28.417372    3215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 03:33:28.417957    3215 kubeadm.go:310] 
	I0916 03:33:28.418038    3215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 03:33:28.418050    3215 kubeadm.go:310] 
	I0916 03:33:28.418138    3215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 03:33:28.418146    3215 kubeadm.go:310] 
	I0916 03:33:28.418174    3215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 03:33:28.418232    3215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 03:33:28.418284    3215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 03:33:28.418293    3215 kubeadm.go:310] 
	I0916 03:33:28.418344    3215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 03:33:28.418355    3215 kubeadm.go:310] 
	I0916 03:33:28.418389    3215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 03:33:28.418394    3215 kubeadm.go:310] 
	I0916 03:33:28.418430    3215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 03:33:28.418489    3215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 03:33:28.418537    3215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 03:33:28.418543    3215 kubeadm.go:310] 
	I0916 03:33:28.418609    3215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 03:33:28.418677    3215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 03:33:28.418689    3215 kubeadm.go:310] 
	I0916 03:33:28.418767    3215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y8yedo.jt70eht9od170wkp \
	I0916 03:33:28.418853    3215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b \
	I0916 03:33:28.418872    3215 kubeadm.go:310] 	--control-plane 
	I0916 03:33:28.418878    3215 kubeadm.go:310] 
	I0916 03:33:28.418947    3215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 03:33:28.418955    3215 kubeadm.go:310] 
	I0916 03:33:28.419020    3215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y8yedo.jt70eht9od170wkp \
	I0916 03:33:28.419109    3215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b 
	I0916 03:33:28.420099    3215 kubeadm.go:310] W0916 10:33:19.929578    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:33:28.420321    3215 kubeadm.go:310] W0916 10:33:19.930124    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:33:28.420411    3215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 03:33:28.420423    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:33:28.420428    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:33:28.442432    3215 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 03:33:28.515420    3215 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 03:33:28.520899    3215 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 03:33:28.520911    3215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 03:33:28.534957    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 03:33:28.738508    3215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:33:28.738581    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-632000 minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-632000 minikube.k8s.io/primary=true
	I0916 03:33:28.738584    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:28.750245    3215 ops.go:34] apiserver oom_adj: -16
	I0916 03:33:28.862634    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:29.363585    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:29.862883    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:30.363463    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:30.862789    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:31.363423    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:31.863338    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:32.363928    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:32.863293    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:33.364118    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:33.429704    3215 kubeadm.go:1113] duration metric: took 4.691265697s to wait for elevateKubeSystemPrivileges
	I0916 03:33:33.429728    3215 kubeadm.go:394] duration metric: took 13.834637415s to StartCluster
	I0916 03:33:33.429742    3215 settings.go:142] acquiring lock: {Name:mk3e31582aa73ce4c6a48b30dd26240f48d1f910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:33.429834    3215 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.430304    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:33.430579    3215 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:33:33.430618    3215 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 03:33:33.430670    3215 addons.go:69] Setting storage-provisioner=true in profile "ha-632000"
	I0916 03:33:33.430685    3215 addons.go:234] Setting addon storage-provisioner=true in "ha-632000"
	I0916 03:33:33.430703    3215 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:33:33.430697    3215 addons.go:69] Setting default-storageclass=true in profile "ha-632000"
	I0916 03:33:33.430711    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:33:33.430716    3215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-632000"
	I0916 03:33:33.430953    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.430973    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.430991    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.431016    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.439627    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50915
	I0916 03:33:33.439795    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50917
	I0916 03:33:33.439974    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.440135    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.440315    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.440324    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.440429    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.440437    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.440555    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.440626    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.440745    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.440842    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.440906    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.440925    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.440947    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.442880    3215 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.443112    3215 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5442720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:33:33.443500    3215 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 03:33:33.443659    3215 addons.go:234] Setting addon default-storageclass=true in "ha-632000"
	I0916 03:33:33.443680    3215 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:33:33.443902    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.443934    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.449558    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50919
	I0916 03:33:33.449892    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.450216    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.450227    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.450460    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.450582    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.450662    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.450730    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.451674    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:33.451984    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50921
	I0916 03:33:33.453648    3215 out.go:177] * Verifying Kubernetes components...
	I0916 03:33:33.454306    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.476174    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.476201    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.476655    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.477354    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.477389    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.486360    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50923
	I0916 03:33:33.486688    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.487027    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.487052    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.487289    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.487415    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.487496    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.487566    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.488487    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:33.488617    3215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:33:33.488624    3215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:33:33.488638    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:33.488727    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:33.488819    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:33.488900    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:33.488984    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:33.495582    3215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:33:33.532585    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:33.536171    3215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:33:33.568840    3215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:33:33.568856    3215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:33:33.568874    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:33.569037    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:33.569132    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:33.569233    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:33.569318    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:33.677716    3215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:33:33.760690    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.760708    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.760911    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.760920    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.760922    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.760928    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.760933    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.760969    3215 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.761070    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.761081    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.761086    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.761141    3215 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 03:33:33.761157    3215 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 03:33:33.761231    3215 round_trippers.go:463] GET https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 03:33:33.761240    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.761248    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.761252    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.761223    3215 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5442720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:33:33.761463    3215 node_ready.go:35] waiting up to 6m0s for node "ha-632000" to be "Ready" ...
	I0916 03:33:33.761537    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:33.761544    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.761552    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.761556    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.770075    3215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:33:33.770800    3215 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 03:33:33.771460    3215 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 03:33:33.771842    3215 round_trippers.go:463] PUT https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 03:33:33.771850    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.771856    3215 round_trippers.go:473]     Content-Type: application/json
	I0916 03:33:33.771860    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.771862    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.773823    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:33.773917    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.773926    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.774061    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.774105    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.774114    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.986780    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.986793    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.986952    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.986961    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.986989    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.986995    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.987002    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.987133    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.987142    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:34.026048    3215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 03:33:34.084170    3215 addons.go:510] duration metric: took 653.569526ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 03:33:34.262134    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:34.262146    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:34.262152    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:34.262155    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:34.263878    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:34.763483    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:34.763507    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:34.763520    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:34.763528    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:34.765986    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.261684    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:35.261700    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:35.261709    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:35.261713    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:35.264051    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.761645    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:35.761683    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:35.761695    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:35.761704    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:35.763760    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.764181    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:36.262850    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:36.262862    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:36.262869    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:36.262872    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:36.264350    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:36.762036    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:36.762050    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:36.762056    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:36.762059    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:36.763513    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.261989    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:37.262001    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:37.262007    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:37.262011    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:37.263638    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.762293    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:37.762305    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:37.762312    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:37.762316    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:37.764045    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.764533    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:38.262246    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:38.262263    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:38.262271    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:38.262275    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:38.264310    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:38.762849    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:38.762871    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:38.762882    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:38.762888    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:38.765298    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:39.262381    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:39.262399    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:39.262407    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:39.262412    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:39.264385    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:39.763479    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:39.763501    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:39.763513    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:39.763521    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:39.767242    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:39.767764    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:40.262657    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:40.262677    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:40.262689    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:40.262696    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:40.264759    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:40.763577    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:40.763600    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:40.763611    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:40.763616    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:40.766324    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:41.263184    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:41.263211    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:41.263222    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:41.263228    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:41.265807    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:41.762540    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:41.762565    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:41.762576    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:41.762582    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:41.765323    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:42.261698    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:42.261721    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:42.261747    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:42.261751    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:42.263901    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:42.264240    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:42.763304    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:42.763320    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:42.763340    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:42.763346    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:42.764940    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:43.261975    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:43.261995    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:43.262012    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:43.262017    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:43.264597    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:43.761694    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:43.761720    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:43.761731    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:43.761737    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:43.764275    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:44.262603    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:44.262627    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:44.262638    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:44.262646    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:44.265393    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:44.265969    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:44.763032    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:44.763049    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:44.763057    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:44.763063    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:44.765005    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:45.263561    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:45.263618    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:45.263633    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:45.263642    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:45.266255    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:45.761651    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:45.761677    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:45.761689    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:45.761694    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:45.764133    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.261907    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:46.261932    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:46.261943    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:46.261976    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:46.264321    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.761893    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:46.761915    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:46.761926    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:46.761933    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:46.764551    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.765163    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:47.262696    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:47.262753    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:47.262769    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:47.262774    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:47.265457    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:47.761981    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:47.761997    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:47.762003    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:47.762011    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:47.763360    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:48.263538    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:48.263560    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:48.263572    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:48.263579    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:48.266157    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:48.761473    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:48.761492    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:48.761539    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:48.761551    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:48.763706    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:49.261567    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:49.261622    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:49.261636    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:49.261656    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:49.264556    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:49.265001    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:49.761753    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:49.761778    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:49.761791    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:49.761798    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:49.764494    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:50.261540    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:50.261562    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:50.261574    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:50.261581    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:50.263837    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:50.762142    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:50.762165    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:50.762177    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:50.762182    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:50.765624    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:51.262576    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:51.262598    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:51.262610    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:51.262615    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:51.265022    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:51.265430    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:51.761696    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:51.761721    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:51.761732    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:51.761737    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:51.764270    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:52.262594    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:52.262620    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:52.262640    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:52.262647    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:52.265282    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:52.761489    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:52.761511    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:52.761522    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:52.761529    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:52.764663    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.262096    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.262117    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.262128    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.262135    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.264544    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:53.265052    3215 node_ready.go:49] node "ha-632000" has status "Ready":"True"
	I0916 03:33:53.265068    3215 node_ready.go:38] duration metric: took 19.503901344s for node "ha-632000" to be "Ready" ...
	I0916 03:33:53.265078    3215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:33:53.265127    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:53.265136    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.265143    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.265149    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.267123    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:53.270799    3215 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:53.270865    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:53.270870    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.270876    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.270880    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.274733    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.278057    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.278070    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.278085    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.278088    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.283035    3215 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 03:33:53.771015    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:53.771028    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.771034    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.771037    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.774699    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.775096    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.775104    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.775114    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.775117    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.779172    3215 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 03:33:54.272320    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:54.272341    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.272352    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.272360    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.278703    3215 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 03:33:54.279202    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.279210    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.279215    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.279218    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.280322    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.280625    3215 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.280634    3215 pod_ready.go:82] duration metric: took 1.009838309s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.280640    3215 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.280678    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:33:54.280683    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.280688    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.280693    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.281730    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.282128    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.282138    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.282144    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.282147    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.283306    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.283662    3215 pod_ready.go:93] pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.283671    3215 pod_ready.go:82] duration metric: took 3.025186ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.283677    3215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.283707    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:33:54.283712    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.283717    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.283721    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.284794    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.285105    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.285112    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.285118    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.285121    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.286066    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:54.286344    3215 pod_ready.go:93] pod "etcd-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.286352    3215 pod_ready.go:82] duration metric: took 2.670668ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.286360    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.286389    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:33:54.286395    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.286401    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.286410    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.287350    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:54.287704    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.287712    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.287717    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.287721    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.290567    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.291091    3215 pod_ready.go:93] pod "kube-apiserver-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.291099    3215 pod_ready.go:82] duration metric: took 4.734285ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.291107    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.291144    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:33:54.291149    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.291154    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.291159    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.292596    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.462366    3215 request.go:632] Waited for 169.108796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.462403    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.462409    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.462418    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.462424    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.464555    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.464828    3215 pod_ready.go:93] pod "kube-controller-manager-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.464837    3215 pod_ready.go:82] duration metric: took 173.727944ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.464857    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.664146    3215 request.go:632] Waited for 199.19688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:33:54.664283    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:33:54.664294    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.664305    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.664313    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.667027    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.863026    3215 request.go:632] Waited for 195.502853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.863072    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.863080    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.863111    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.863131    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.865708    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.866269    3215 pod_ready.go:93] pod "kube-proxy-g95xd" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.866281    3215 pod_ready.go:82] duration metric: took 401.424869ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.866289    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:55.062545    3215 request.go:632] Waited for 196.212796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:33:55.062598    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:33:55.062648    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.062662    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.062671    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.065409    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.264190    3215 request.go:632] Waited for 198.096577ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:55.264338    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:55.264348    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.264359    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.264365    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.267110    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.267603    3215 pod_ready.go:93] pod "kube-scheduler-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:55.267614    3215 pod_ready.go:82] duration metric: took 401.325423ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:55.267623    3215 pod_ready.go:39] duration metric: took 2.002567067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:33:55.267637    3215 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:33:55.267712    3215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:33:55.281508    3215 api_server.go:72] duration metric: took 21.851262532s to wait for apiserver process to appear ...
	I0916 03:33:55.281520    3215 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:33:55.281536    3215 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:33:55.284999    3215 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:33:55.285036    3215 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:33:55.285041    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.285048    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.285053    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.285742    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:55.285839    3215 api_server.go:141] control plane version: v1.31.1
	I0916 03:33:55.285849    3215 api_server.go:131] duration metric: took 4.325313ms to wait for apiserver health ...
	I0916 03:33:55.285854    3215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:33:55.463701    3215 request.go:632] Waited for 177.750875ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.463788    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.463797    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.463808    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.463818    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.466896    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:55.469533    3215 system_pods.go:59] 9 kube-system pods found
	I0916 03:33:55.469549    3215 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:33:55.469554    3215 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:33:55.469557    3215 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:33:55.469561    3215 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:33:55.469564    3215 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:33:55.469571    3215 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:33:55.469575    3215 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:33:55.469579    3215 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:33:55.469582    3215 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:33:55.469587    3215 system_pods.go:74] duration metric: took 183.731771ms to wait for pod list to return data ...
	I0916 03:33:55.469594    3215 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:33:55.663051    3215 request.go:632] Waited for 193.382125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:33:55.663136    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:33:55.663146    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.663157    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.663163    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.665451    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.665623    3215 default_sa.go:45] found service account: "default"
	I0916 03:33:55.665632    3215 default_sa.go:55] duration metric: took 196.037439ms for default service account to be created ...
	I0916 03:33:55.665640    3215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:33:55.862921    3215 request.go:632] Waited for 197.220841ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.862981    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.862988    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.862996    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.863002    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.865581    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.868473    3215 system_pods.go:86] 9 kube-system pods found
	I0916 03:33:55.868489    3215 system_pods.go:89] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:33:55.868493    3215 system_pods.go:89] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:33:55.868497    3215 system_pods.go:89] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:33:55.868500    3215 system_pods.go:89] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:33:55.868503    3215 system_pods.go:89] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:33:55.868507    3215 system_pods.go:89] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:33:55.868510    3215 system_pods.go:89] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:33:55.868513    3215 system_pods.go:89] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:33:55.868516    3215 system_pods.go:89] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:33:55.868520    3215 system_pods.go:126] duration metric: took 202.879593ms to wait for k8s-apps to be running ...
	I0916 03:33:55.868528    3215 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:33:55.868586    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:33:55.879099    3215 system_svc.go:56] duration metric: took 10.567187ms WaitForService to wait for kubelet
	I0916 03:33:55.879113    3215 kubeadm.go:582] duration metric: took 22.44887903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:33:55.879124    3215 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:33:56.063126    3215 request.go:632] Waited for 183.899083ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0916 03:33:56.063217    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:33:56.063229    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:56.063241    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:56.063252    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:56.066353    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:56.066655    3215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:33:56.066668    3215 node_conditions.go:123] node cpu capacity is 2
	I0916 03:33:56.066677    3215 node_conditions.go:105] duration metric: took 187.551212ms to run NodePressure ...
	I0916 03:33:56.066686    3215 start.go:241] waiting for startup goroutines ...
	I0916 03:33:56.066691    3215 start.go:246] waiting for cluster config update ...
	I0916 03:33:56.066699    3215 start.go:255] writing updated cluster config ...
	I0916 03:33:56.067035    3215 ssh_runner.go:195] Run: rm -f paused
	I0916 03:33:56.108029    3215 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:33:56.129524    3215 out.go:201] 
	W0916 03:33:56.152241    3215 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:33:56.173991    3215 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:33:56.216451    3215 out.go:177] * Done! kubectl is now configured to use "ha-632000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.603901952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.605276678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612426899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612491453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612504097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612583075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614297908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614331056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614338873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614390000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea367824f74364da419f871819f256ac691303099b06d3a3f7f5572625ffbd40/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24a42374ff36346ea697e812cc1c7677f33e83a3205f01392b17d167e6e4970a/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/943f7896df505e59c0950d925676b2731e201c7d27502d47f29d2798ebd886ab/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847458491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847572022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847587186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847723496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857014491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857158139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857644130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857888245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910561263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910626962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910639704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910733189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9cdebd7b24ee9       c69fa2e9cbf5f                                                                              4 seconds ago       Running             coredns                   0                   943f7896df505       coredns-7c65d6cfc9-jkv8h
	1a9b8d16193d8       c69fa2e9cbf5f                                                                              4 seconds ago       Running             coredns                   0                   ea367824f7436       coredns-7c65d6cfc9-kmtsq
	b3d5179d61251       6e38f40d628db                                                                              4 seconds ago       Running             storage-provisioner       0                   24a42374ff363       storage-provisioner
	da46c43d13fe8       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166   20 seconds ago      Running             kindnet-cni               0                   b16c72338b6ac       kindnet-gs2gm
	fae437a3e2407       60c005f310ff3                                                                              23 seconds ago      Running             kube-proxy                0                   7d1ed68d9ed61       kube-proxy-g95xd
	6dd1718ae6cb7       2e96e5913fc06                                                                              34 seconds ago      Running             etcd                      0                   61ff2b4664268       etcd-ha-632000
	69e9ce19f1363       9aa1fad941575                                                                              34 seconds ago      Running             kube-scheduler            0                   cf944230e79eb       kube-scheduler-ha-632000
	95857f7ef6fa1       6bab7719df100                                                                              34 seconds ago      Running             kube-apiserver            0                   3a54b2c915f89       kube-apiserver-ha-632000
	24a44bacb8b1e       175ffd71cce3d                                                                              34 seconds ago      Running             kube-controller-manager   0                   04ad9902d4d19       kube-controller-manager-ha-632000
	
	
	==> coredns [1a9b8d16193d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9cdebd7b24ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               ha-632000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-632000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-632000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-632000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:33:53 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:33:53 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:33:53 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:33:53 +0000   Mon, 16 Sep 2024 10:33:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-632000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb973afb192b4b438c93e65f3c567fea
	  System UUID:                151047b3-0000-0000-af67-8767f9a495bd
	  Boot ID:                    ee848727-c854-498b-9230-f8d196d3a6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jkv8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25s
	  kube-system                 coredns-7c65d6cfc9-kmtsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25s
	  kube-system                 etcd-ha-632000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         31s
	  kube-system                 kindnet-gs2gm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26s
	  kube-system                 kube-apiserver-ha-632000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-ha-632000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-g95xd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-ha-632000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  31s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	  Normal  NodeReady                5s    kubelet          Node ha-632000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006805] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.892462] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.228476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.385529] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.109893] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +1.756837] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.253717] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.098359] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.123426] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +2.158628] kauditd_printk_skb: 167 callbacks suppressed
	[  +0.263225] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.109664] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.105364] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.122543] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.500942] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +2.205265] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.330462] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +3.547813] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	[  +1.149789] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.893434] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +6.116227] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.079212] kauditd_printk_skb: 32 callbacks suppressed
	[ +19.823672] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [6dd1718ae6cb] <==
	{"level":"info","ts":"2024-09-16T10:33:24.047730Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.052160Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.052175Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.047758Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:33:24.052213Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:33:24.707415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707500Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.712635Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:24.712748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.714615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.715358Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.717393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.717649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.717680Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.718273Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.719184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:33:24.719319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.723827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:33:58 up 1 min,  0 users,  load average: 0.31, 0.10, 0.03
	Linux ha-632000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [da46c43d13fe] <==
	I0916 10:33:38.105068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:38.105822       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:33:38.105896       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:38.105923       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:38.105937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:38.701206       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:33:48.705636       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:48.705736       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95857f7ef6fa] <==
	I0916 10:33:25.880296       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:33:25.880403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:33:25.880451       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:33:25.884846       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:33:25.894553       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:33:25.894636       1 policy_source.go:224] refreshing policies
	E0916 10:33:25.907371       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0916 10:33:25.931125       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 10:33:25.978348       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:33:26.110430       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:33:26.783260       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:33:26.787077       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:33:26.787170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:33:27.105540       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:33:27.130234       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:33:27.191665       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:33:27.202457       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0916 10:33:27.203307       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:33:27.211217       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:33:27.835196       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:33:27.994584       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:33:28.000965       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:33:28.007254       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:33:32.589191       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:33:33.389076       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [24a44bacb8b1] <==
	I0916 10:33:32.792285       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.813766       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.836606       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:33:32.836825       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:33:33.208030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283196       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283305       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:33.489386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:33.785544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="393.158805ms"
	I0916 10:33:33.795508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.792999ms"
	I0916 10:33:33.796330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.465µs"
	I0916 10:33:33.796593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.299µs"
	I0916 10:33:53.221882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.228026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.232440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.065µs"
	I0916 10:33:53.240523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.002µs"
	I0916 10:33:53.245409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.983µs"
	I0916 10:33:53.253146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.846µs"
	I0916 10:33:54.145943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.356µs"
	I0916 10:33:54.168878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.87555ms"
	I0916 10:33:54.169182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.016µs"
	I0916 10:33:54.179810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.838071ms"
	I0916 10:33:54.180086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.84µs"
	I0916 10:33:57.576580       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:33:58.801252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	
	
	==> kube-proxy [fae437a3e240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:33:34.691611       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:33:34.697203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:33:34.697256       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:34.722746       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:33:34.723060       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:33:34.723106       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:34.725578       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:34.725970       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:34.725998       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:34.726895       1 config.go:199] "Starting service config controller"
	I0916 10:33:34.726932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:34.726949       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:34.726952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:34.727508       1 config.go:328] "Starting node config controller"
	I0916 10:33:34.727534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:34.827447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:34.827457       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:34.827564       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69e9ce19f136] <==
	W0916 10:33:25.862831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:33:25.865183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.862775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:25.865397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:25.865433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:25.865448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.708747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:26.708911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.709690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:26.709836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.726430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:26.726462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.767336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:26.767420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.769370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:26.769455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.794289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:26.794346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:27.460783       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.697919    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98c1fdde-9eb3-4546-9d59-b18e4367907d-lib-modules\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698239    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs95q\" (UniqueName: \"kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698532    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrhc\" (UniqueName: \"kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698741    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9931709a-b521-4642-8c91-c0bb52b9f284-cni-cfg\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698943    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-proxy\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.699188    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98c1fdde-9eb3-4546-9d59-b18e4367907d-xtables-lock\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.699409    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9931709a-b521-4642-8c91-c0bb52b9f284-xtables-lock\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807551    2057 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807574    2057 projected.go:194] Error preparing data for projected volume kube-api-access-jgrhc for pod kube-system/kindnet-gs2gm: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807616    2057 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc podName:9931709a-b521-4642-8c91-c0bb52b9f284 nodeName:}" failed. No retries permitted until 2024-09-16 10:33:34.307603452 +0000 UTC m=+6.545421988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jgrhc" (UniqueName: "kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc") pod "kindnet-gs2gm" (UID: "9931709a-b521-4642-8c91-c0bb52b9f284") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807751    2057 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807760    2057 projected.go:194] Error preparing data for projected volume kube-api-access-zs95q for pod kube-system/kube-proxy-g95xd: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807778    2057 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q podName:98c1fdde-9eb3-4546-9d59-b18e4367907d nodeName:}" failed. No retries permitted until 2024-09-16 10:33:34.307772689 +0000 UTC m=+6.545591224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zs95q" (UniqueName: "kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q") pod "kube-proxy-g95xd" (UID: "98c1fdde-9eb3-4546-9d59-b18e4367907d") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:34 ha-632000 kubelet[2057]: I0916 10:33:34.312060    2057 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:33:35 ha-632000 kubelet[2057]: I0916 10:33:35.003443    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g95xd" podStartSLOduration=3.003431476 podStartE2EDuration="3.003431476s" podCreationTimestamp="2024-09-16 10:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:35.003334041 +0000 UTC m=+7.241152584" watchObservedRunningTime="2024-09-16 10:33:35.003431476 +0000 UTC m=+7.241250019"
	Sep 16 10:33:39 ha-632000 kubelet[2057]: I0916 10:33:39.041882    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gs2gm" podStartSLOduration=3.8103986340000002 podStartE2EDuration="7.041866411s" podCreationTimestamp="2024-09-16 10:33:32 +0000 UTC" firstStartedPulling="2024-09-16 10:33:34.639398593 +0000 UTC m=+6.877217125" lastFinishedPulling="2024-09-16 10:33:37.870866368 +0000 UTC m=+10.108684902" observedRunningTime="2024-09-16 10:33:39.041153884 +0000 UTC m=+11.278972427" watchObservedRunningTime="2024-09-16 10:33:39.041866411 +0000 UTC m=+11.279684948"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.214934    2057 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357436    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75558cac-fdbb-4a54-94c5-31f3f3d980c3-tmp\") pod \"storage-provisioner\" (UID: \"75558cac-fdbb-4a54-94c5-31f3f3d980c3\") " pod="kube-system/storage-provisioner"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357559    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume\") pod \"coredns-7c65d6cfc9-kmtsq\" (UID: \"535c3a5b-013f-4bdc-94c4-b109c6f43b0b\") " pod="kube-system/coredns-7c65d6cfc9-kmtsq"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357633    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkzf\" (UniqueName: \"kubernetes.io/projected/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-kube-api-access-phkzf\") pod \"coredns-7c65d6cfc9-kmtsq\" (UID: \"535c3a5b-013f-4bdc-94c4-b109c6f43b0b\") " pod="kube-system/coredns-7c65d6cfc9-kmtsq"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357666    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume\") pod \"coredns-7c65d6cfc9-jkv8h\" (UID: \"b174db06-0ade-4594-8b4b-6058301491f8\") " pod="kube-system/coredns-7c65d6cfc9-jkv8h"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357687    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jb5n\" (UniqueName: \"kubernetes.io/projected/b174db06-0ade-4594-8b4b-6058301491f8-kube-api-access-9jb5n\") pod \"coredns-7c65d6cfc9-jkv8h\" (UID: \"b174db06-0ade-4594-8b4b-6058301491f8\") " pod="kube-system/coredns-7c65d6cfc9-jkv8h"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357709    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbqn\" (UniqueName: \"kubernetes.io/projected/75558cac-fdbb-4a54-94c5-31f3f3d980c3-kube-api-access-pvbqn\") pod \"storage-provisioner\" (UID: \"75558cac-fdbb-4a54-94c5-31f3f3d980c3\") " pod="kube-system/storage-provisioner"
	Sep 16 10:33:54 ha-632000 kubelet[2057]: I0916 10:33:54.160076    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podStartSLOduration=21.160061336 podStartE2EDuration="21.160061336s" podCreationTimestamp="2024-09-16 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:54.148085555 +0000 UTC m=+26.385904097" watchObservedRunningTime="2024-09-16 10:33:54.160061336 +0000 UTC m=+26.397879874"
	Sep 16 10:33:54 ha-632000 kubelet[2057]: I0916 10:33:54.192691    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podStartSLOduration=21.192643097 podStartE2EDuration="21.192643097s" podCreationTimestamp="2024-09-16 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:54.174926711 +0000 UTC m=+26.412745253" watchObservedRunningTime="2024-09-16 10:33:54.192643097 +0000 UTC m=+26.430461634"
	
	
	==> storage-provisioner [b3d5179d6125] <==
	I0916 10:33:53.925247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:33:53.932000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:33:53.932238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:33:53.952610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:33:53.952935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763!
	I0916 10:33:53.954583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"faa6a23c-3add-429a-8ab0-8b9c7c81cc28", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763 became leader
	I0916 10:33:54.054153       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-632000 -n ha-632000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-632000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (3.16s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.88s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-632000" in json of 'profile list' to have "Degraded" status but have "Running" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Running\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersi
on\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\"
:\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-632000 logs -n 25: (2.256732765s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-632000 -- apply -f             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- rollout status       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | deployment/busybox                   |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  -- nslookup    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-632000 -v=7                | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node stop m02 -v=7         | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node start m02 -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000 -v=7               | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-632000 -v=7                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:33 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	| node    | ha-632000 node delete m03 -v=7       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:32:55
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:32:55.479806    3215 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:32:55.479979    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:55.479984    3215 out.go:358] Setting ErrFile to fd 2...
	I0916 03:32:55.479988    3215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:32:55.480154    3215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:32:55.481720    3215 out.go:352] Setting JSON to false
	I0916 03:32:55.503766    3215 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1946,"bootTime":1726480829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:32:55.503923    3215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:32:55.525873    3215 out.go:177] * [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:32:55.569056    3215 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:32:55.569122    3215 notify.go:220] Checking for updates...
	I0916 03:32:55.611676    3215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:32:55.632614    3215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:32:55.653732    3215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:32:55.674798    3215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:32:55.695893    3215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:32:55.717487    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:32:55.717654    3215 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:32:55.718427    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.718486    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:55.727827    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50892
	I0916 03:32:55.728193    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:55.728608    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:32:55.728620    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:55.728861    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:55.728985    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:55.757983    3215 out.go:177] * Using the hyperkit driver based on existing profile
	I0916 03:32:55.799698    3215 start.go:297] selected driver: hyperkit
	I0916 03:32:55.799725    3215 start.go:901] validating driver "hyperkit" against &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:32:55.799929    3215 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:32:55.800120    3215 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:32:55.800326    3215 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:32:55.810037    3215 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:32:55.813846    3215 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.813866    3215 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:32:55.816398    3215 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:32:55.816433    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:32:55.816474    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:32:55.816535    3215 start.go:340] cluster config:
	{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:32:55.816615    3215 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:32:55.858742    3215 out.go:177] * Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	I0916 03:32:55.879767    3215 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:32:55.879887    3215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:32:55.879907    3215 cache.go:56] Caching tarball of preloaded images
	I0916 03:32:55.880125    3215 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:32:55.880144    3215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:32:55.880309    3215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:32:55.881254    3215 start.go:360] acquireMachinesLock for ha-632000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:32:55.881410    3215 start.go:364] duration metric: took 129.858µs to acquireMachinesLock for "ha-632000"
	I0916 03:32:55.881446    3215 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:32:55.881464    3215 fix.go:54] fixHost starting: 
	I0916 03:32:55.881899    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:32:55.881941    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:32:55.891224    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50894
	I0916 03:32:55.891581    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:32:55.891955    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:32:55.891969    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:32:55.892169    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:32:55.892298    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:55.892402    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:32:55.892496    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:55.892579    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 2677
	I0916 03:32:55.893499    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 2677 missing from process table
	I0916 03:32:55.893531    3215 fix.go:112] recreateIfNeeded on ha-632000: state=Stopped err=<nil>
	I0916 03:32:55.893549    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	W0916 03:32:55.893640    3215 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:32:55.914812    3215 out.go:177] * Restarting existing hyperkit VM for "ha-632000" ...
	I0916 03:32:55.956786    3215 main.go:141] libmachine: (ha-632000) Calling .Start
	I0916 03:32:55.957059    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:55.957099    3215 main.go:141] libmachine: (ha-632000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid
	I0916 03:32:55.958906    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 2677 missing from process table
	I0916 03:32:55.958920    3215 main.go:141] libmachine: (ha-632000) DBG | pid 2677 is in state "Stopped"
	I0916 03:32:55.958942    3215 main.go:141] libmachine: (ha-632000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid...
	I0916 03:32:55.959251    3215 main.go:141] libmachine: (ha-632000) DBG | Using UUID 1510d94a-ca8f-47b3-af67-8767f9a495bd
	I0916 03:32:56.070187    3215 main.go:141] libmachine: (ha-632000) DBG | Generated MAC 1a:88:5b:ab:d5:de
	I0916 03:32:56.070219    3215 main.go:141] libmachine: (ha-632000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:32:56.070325    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:32:56.070364    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:32:56.070401    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1510d94a-ca8f-47b3-af67-8767f9a495bd", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:32:56.070433    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1510d94a-ca8f-47b3-af67-8767f9a495bd -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:32:56.070442    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:32:56.071925    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 DEBUG: hyperkit: Pid is 3227
	I0916 03:32:56.072259    3215 main.go:141] libmachine: (ha-632000) DBG | Attempt 0
	I0916 03:32:56.072278    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:32:56.072356    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:32:56.074045    3215 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:32:56.074108    3215 main.go:141] libmachine: (ha-632000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:32:56.074131    3215 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95a02}
	I0916 03:32:56.074143    3215 main.go:141] libmachine: (ha-632000) DBG | Found match: 1a:88:5b:ab:d5:de
	I0916 03:32:56.074178    3215 main.go:141] libmachine: (ha-632000) DBG | IP: 192.169.0.5
	I0916 03:32:56.074212    3215 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:32:56.074819    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:32:56.075006    3215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:32:56.075469    3215 machine.go:93] provisionDockerMachine start ...
	I0916 03:32:56.075480    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:32:56.075609    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:32:56.075727    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:32:56.075849    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:56.075988    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:32:56.076094    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:32:56.076241    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:32:56.076440    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:32:56.076449    3215 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:32:56.079763    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:32:56.132093    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:32:56.132743    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:32:56.132759    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:32:56.132779    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:32:56.132793    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:32:56.511645    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:32:56.511711    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:32:56.626770    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:32:56.626798    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:32:56.626807    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:32:56.626814    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:32:56.627640    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:32:56.627650    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:33:02.223309    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:33:02.223373    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:33:02.223381    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:33:02.247110    3215 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:33:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:33:07.150140    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 03:33:07.150156    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.150328    3215 buildroot.go:166] provisioning hostname "ha-632000"
	I0916 03:33:07.150340    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.150432    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.150541    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.150645    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.150761    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.150866    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.151049    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.151232    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.151240    3215 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000 && echo "ha-632000" | sudo tee /etc/hostname
	I0916 03:33:07.226819    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000
	
	I0916 03:33:07.226837    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.226964    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.227086    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.227191    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.227294    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.227437    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.227577    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.227587    3215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:33:07.299987    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:33:07.300009    3215 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:33:07.300033    3215 buildroot.go:174] setting up certificates
	I0916 03:33:07.300040    3215 provision.go:84] configureAuth start
	I0916 03:33:07.300048    3215 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:33:07.300187    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:07.300306    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.300398    3215 provision.go:143] copyHostCerts
	I0916 03:33:07.300432    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:33:07.300507    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:33:07.300515    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:33:07.300651    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:33:07.300860    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:33:07.300900    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:33:07.300905    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:33:07.300984    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:33:07.301151    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:33:07.301190    3215 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:33:07.301195    3215 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:33:07.301303    3215 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:33:07.301475    3215 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000 san=[127.0.0.1 192.169.0.5 ha-632000 localhost minikube]
	I0916 03:33:07.406966    3215 provision.go:177] copyRemoteCerts
	I0916 03:33:07.407026    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:33:07.407042    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.407190    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.407288    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.407393    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.407475    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:07.447794    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:33:07.447868    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:33:07.466620    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:33:07.466680    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 03:33:07.485470    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:33:07.485532    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:33:07.504299    3215 provision.go:87] duration metric: took 204.246783ms to configureAuth
	I0916 03:33:07.504314    3215 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:33:07.504463    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:33:07.504476    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:07.504604    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.504697    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.504772    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.504849    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.504936    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.505061    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.505189    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.505197    3215 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:33:07.571066    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:33:07.571081    3215 buildroot.go:70] root file system type: tmpfs
	I0916 03:33:07.571159    3215 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:33:07.571173    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.571322    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.571419    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.571513    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.571606    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.571751    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.571889    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.571931    3215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:33:07.649104    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:33:07.649126    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:07.649269    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:07.649371    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.649450    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:07.649533    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:07.649665    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:07.649818    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:07.649830    3215 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:33:09.222903    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:33:09.222918    3215 machine.go:96] duration metric: took 13.147651717s to provisionDockerMachine
	I0916 03:33:09.222930    3215 start.go:293] postStartSetup for "ha-632000" (driver="hyperkit")
	I0916 03:33:09.222938    3215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:33:09.222948    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.223165    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:33:09.223181    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.223274    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.223366    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.223452    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.223531    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.263235    3215 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:33:09.267664    3215 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:33:09.267680    3215 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:33:09.267791    3215 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:33:09.267981    3215 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:33:09.267987    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:33:09.268197    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:33:09.275726    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:33:09.310027    3215 start.go:296] duration metric: took 87.088891ms for postStartSetup
	I0916 03:33:09.310053    3215 fix.go:56] duration metric: took 13.428812423s for fixHost
	I0916 03:33:09.310081    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.310216    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.310300    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.310400    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.310486    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.310608    3215 main.go:141] libmachine: Using SSH client type: native
	I0916 03:33:09.310743    3215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d6c820] 0x3d6f500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:33:09.310754    3215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:33:09.375918    3215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482789.535889203
	
	I0916 03:33:09.375929    3215 fix.go:216] guest clock: 1726482789.535889203
	I0916 03:33:09.375934    3215 fix.go:229] Guest: 2024-09-16 03:33:09.535889203 -0700 PDT Remote: 2024-09-16 03:33:09.310063 -0700 PDT m=+13.866322037 (delta=225.826203ms)
	I0916 03:33:09.375954    3215 fix.go:200] guest clock delta is within tolerance: 225.826203ms
	I0916 03:33:09.375957    3215 start.go:83] releasing machines lock for "ha-632000", held for 13.494753864s
	I0916 03:33:09.375973    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376091    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:09.376174    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376468    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376565    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:09.376645    3215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:33:09.376675    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.376728    3215 ssh_runner.go:195] Run: cat /version.json
	I0916 03:33:09.376737    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:09.376782    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.376831    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:09.376914    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.376927    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:09.377022    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.377041    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:09.377106    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.377126    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:09.456514    3215 ssh_runner.go:195] Run: systemctl --version
	I0916 03:33:09.461506    3215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:33:09.465762    3215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:33:09.465810    3215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:33:09.477939    3215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:33:09.477953    3215 start.go:495] detecting cgroup driver to use...
	I0916 03:33:09.478049    3215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:33:09.492694    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:33:09.501292    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:33:09.509688    3215 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:33:09.509733    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:33:09.518278    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:33:09.526726    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:33:09.534992    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:33:09.543323    3215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:33:09.551873    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:33:09.560327    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:33:09.568584    3215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 03:33:09.576791    3215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:33:09.584186    3215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:33:09.591628    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:09.683919    3215 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:33:09.701171    3215 start.go:495] detecting cgroup driver to use...
	I0916 03:33:09.701254    3215 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:33:09.719887    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:33:09.733645    3215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:33:09.752372    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:33:09.763867    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:33:09.774813    3215 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:33:09.796608    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:33:09.806720    3215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:33:09.821625    3215 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:33:09.824603    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:33:09.831721    3215 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:33:09.845160    3215 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:33:09.936544    3215 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:33:10.047205    3215 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:33:10.047277    3215 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:33:10.062093    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:10.159868    3215 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:33:12.438637    3215 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.278785972s)
	I0916 03:33:12.438709    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:33:12.450131    3215 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:33:12.464365    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:33:12.475529    3215 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:33:12.581964    3215 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:33:12.692206    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:12.795426    3215 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:33:12.809639    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:33:12.820588    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:12.917759    3215 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:33:12.971962    3215 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:33:12.972066    3215 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:33:12.976365    3215 start.go:563] Will wait 60s for crictl version
	I0916 03:33:12.976415    3215 ssh_runner.go:195] Run: which crictl
	I0916 03:33:12.979580    3215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:33:13.004331    3215 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:33:13.004419    3215 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:33:13.021239    3215 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:33:13.062090    3215 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:33:13.062165    3215 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:33:13.062503    3215 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0916 03:33:13.066248    3215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:33:13.076647    3215 kubeadm.go:883] updating cluster {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:33:13.076709    3215 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:33:13.076786    3215 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:33:13.087609    3215 docker.go:685] Got preloaded images: 
	I0916 03:33:13.087621    3215 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 03:33:13.087677    3215 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:33:13.095744    3215 ssh_runner.go:195] Run: which lz4
	I0916 03:33:13.098494    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 03:33:13.098618    3215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 03:33:13.101595    3215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 03:33:13.101612    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0916 03:33:14.084317    3215 docker.go:649] duration metric: took 985.769483ms to copy over tarball
	I0916 03:33:14.084385    3215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 03:33:16.270832    3215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.186464882s)
	I0916 03:33:16.270853    3215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 03:33:16.296645    3215 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 03:33:16.305085    3215 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 03:33:16.318614    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:16.421166    3215 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:33:18.737622    3215 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.316472307s)
	I0916 03:33:18.737735    3215 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:33:18.752879    3215 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:33:18.752896    3215 cache_images.go:84] Images are preloaded, skipping loading
	I0916 03:33:18.752905    3215 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0916 03:33:18.752990    3215 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:33:18.753073    3215 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:33:18.789519    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:33:18.789532    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:33:18.789538    3215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:33:18.789553    3215 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-632000 NodeName:ha-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:33:18.789647    3215 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-632000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:33:18.789726    3215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:33:18.798052    3215 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:33:18.798104    3215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:33:18.806056    3215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0916 03:33:18.819451    3215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:33:18.832642    3215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0916 03:33:18.846172    3215 ssh_runner.go:195] Run: grep 192.169.0.5	control-plane.minikube.internal$ /etc/hosts
	I0916 03:33:18.849147    3215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:33:18.859202    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:18.951797    3215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:33:18.967824    3215 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000 for IP: 192.169.0.5
	I0916 03:33:18.967844    3215 certs.go:194] generating shared ca certs ...
	I0916 03:33:18.967855    3215 certs.go:226] acquiring lock for ca certs: {Name:mkbe6743287669047aae1be1aaf47d24793fafa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:18.968046    3215 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key
	I0916 03:33:18.968119    3215 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key
	I0916 03:33:18.968130    3215 certs.go:256] generating profile certs ...
	I0916 03:33:18.968187    3215 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key
	I0916 03:33:18.968202    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt with IP's: []
	I0916 03:33:19.115081    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt ...
	I0916 03:33:19.115096    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt: {Name:mk46c1c0ef51002f68e28a5ff2adf0ca45fc8f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.115393    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key ...
	I0916 03:33:19.115401    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key: {Name:mk2fd0817be29d26677a71912f5f31a0061cebca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.115607    3215 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250
	I0916 03:33:19.115623    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5]
	I0916 03:33:19.247064    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 ...
	I0916 03:33:19.247077    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250: {Name:mkb3fc96934d50a5857a7bb2474da42e6bb4ee05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.247354    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250 ...
	I0916 03:33:19.247363    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250: {Name:mk201d7e07e1cd31e308f5f1b72647c4ee60af0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.247562    3215 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.63f9e250 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt
	I0916 03:33:19.247731    3215 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key
	I0916 03:33:19.247889    3215 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key
	I0916 03:33:19.247905    3215 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt with IP's: []
	I0916 03:33:19.281009    3215 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt ...
	I0916 03:33:19.281017    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt: {Name:mk60aa0a5a5eb7665c3030f065b1020c5c196352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.281271    3215 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key ...
	I0916 03:33:19.281278    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key: {Name:mkbdfac964940ca71119764662d5d8f688d3a383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:19.281517    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 03:33:19.281543    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 03:33:19.281562    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 03:33:19.281599    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 03:33:19.281632    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 03:33:19.281685    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 03:33:19.281730    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 03:33:19.281781    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 03:33:19.281919    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem (1338 bytes)
	W0916 03:33:19.281965    3215 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649_empty.pem, impossibly tiny 0 bytes
	I0916 03:33:19.281972    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 03:33:19.282028    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem (1082 bytes)
	I0916 03:33:19.282097    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:33:19.282141    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem (1675 bytes)
	I0916 03:33:19.282236    3215 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:33:19.282270    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.282290    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem -> /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.282334    3215 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.282838    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:33:19.304358    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:33:19.323828    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:33:19.345178    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:33:19.364867    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0916 03:33:19.384268    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:33:19.404216    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:33:19.423944    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 03:33:19.443472    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:33:19.463114    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem --> /usr/share/ca-certificates/1649.pem (1338 bytes)
	I0916 03:33:19.482459    3215 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /usr/share/ca-certificates/16492.pem (1708 bytes)
	I0916 03:33:19.502483    3215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:33:19.515926    3215 ssh_runner.go:195] Run: openssl version
	I0916 03:33:19.520096    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1649.pem && ln -fs /usr/share/ca-certificates/1649.pem /etc/ssl/certs/1649.pem"
	I0916 03:33:19.528244    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.531642    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:24 /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.531679    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1649.pem
	I0916 03:33:19.535928    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1649.pem /etc/ssl/certs/51391683.0"
	I0916 03:33:19.544083    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16492.pem && ln -fs /usr/share/ca-certificates/16492.pem /etc/ssl/certs/16492.pem"
	I0916 03:33:19.552339    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.555606    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:24 /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.555645    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16492.pem
	I0916 03:33:19.559871    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16492.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 03:33:19.568117    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:33:19.576404    3215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.579723    3215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.579766    3215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:33:19.583979    3215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
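	[editor's note] The `ln -fs ... /etc/ssl/certs/b5213941.0` commands above follow OpenSSL's subject-hash trust-store convention: each trusted CA is exposed as a symlink named after its subject hash (printed by `openssl x509 -hash`) plus a `.0` suffix, which is how OpenSSL resolves CAs at verification time. A minimal sketch of the same convention against a throwaway self-signed CA (all paths here are illustrative, not taken from the log):

```shell
#!/bin/sh
set -e

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Throwaway self-signed CA, standing in for minikube's ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null

# "openssl x509 -hash" prints the 8-hex-digit subject hash
# (e.g. "b5213941" for minikubeCA in the log above).
hash=$(openssl x509 -hash -noout -in "$workdir/ca.pem")

# The trust-store entry is a symlink named <hash>.0 pointing at the PEM,
# mirroring the "test -L ... || ln -fs ..." commands in the log.
ln -fs "$workdir/ca.pem" "$workdir/$hash.0"

# With the symlink in place, -CApath lookup succeeds.
openssl verify -CApath "$workdir" "$workdir/ca.pem"
```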
	I0916 03:33:19.592225    3215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:33:19.595276    3215 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 03:33:19.595317    3215 kubeadm.go:392] StartCluster: {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:33:19.595419    3215 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:33:19.607275    3215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:33:19.617743    3215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:33:19.627110    3215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:33:19.638606    3215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 03:33:19.638617    3215 kubeadm.go:157] found existing configuration files:
	
	I0916 03:33:19.638678    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 03:33:19.646937    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 03:33:19.646992    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 03:33:19.657157    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 03:33:19.665311    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 03:33:19.665364    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 03:33:19.673591    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 03:33:19.681424    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 03:33:19.681466    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:33:19.689446    3215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 03:33:19.697315    3215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 03:33:19.697361    3215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 03:33:19.705435    3215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 03:33:19.767025    3215 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 03:33:19.767126    3215 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 03:33:19.839383    3215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 03:33:19.839477    3215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 03:33:19.839569    3215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 03:33:19.851158    3215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 03:33:19.907269    3215 out.go:235]   - Generating certificates and keys ...
	I0916 03:33:19.907322    3215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 03:33:19.907385    3215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 03:33:19.958129    3215 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 03:33:20.052533    3215 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 03:33:20.318871    3215 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 03:33:20.475785    3215 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 03:33:20.681854    3215 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 03:33:20.681946    3215 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-632000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0916 03:33:20.851582    3215 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 03:33:20.851680    3215 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-632000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I0916 03:33:20.958930    3215 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 03:33:21.180837    3215 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 03:33:21.364391    3215 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 03:33:21.364452    3215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 03:33:21.537997    3215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 03:33:21.700525    3215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 03:33:21.962237    3215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 03:33:22.078095    3215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 03:33:22.379417    3215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 03:33:22.379815    3215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 03:33:22.381566    3215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 03:33:22.402746    3215 out.go:235]   - Booting up control plane ...
	I0916 03:33:22.402821    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 03:33:22.402884    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 03:33:22.402931    3215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 03:33:22.403013    3215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 03:33:22.403090    3215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 03:33:22.403130    3215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 03:33:22.507967    3215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 03:33:22.508073    3215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 03:33:23.010071    3215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.491973ms
	I0916 03:33:23.010162    3215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 03:33:27.009710    3215 kubeadm.go:310] [api-check] The API server is healthy after 4.002390619s
	I0916 03:33:27.020463    3215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 03:33:27.029414    3215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 03:33:27.046102    3215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 03:33:27.046245    3215 kubeadm.go:310] [mark-control-plane] Marking the node ha-632000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 03:33:27.052141    3215 kubeadm.go:310] [bootstrap-token] Using token: y8yedo.jt70eht9od170wkp
	I0916 03:33:27.089069    3215 out.go:235]   - Configuring RBAC rules ...
	I0916 03:33:27.089150    3215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 03:33:27.090796    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 03:33:27.131698    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 03:33:27.133472    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 03:33:27.135748    3215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 03:33:27.137503    3215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 03:33:27.431991    3215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 03:33:27.829543    3215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 03:33:28.417372    3215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 03:33:28.417957    3215 kubeadm.go:310] 
	I0916 03:33:28.418038    3215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 03:33:28.418050    3215 kubeadm.go:310] 
	I0916 03:33:28.418138    3215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 03:33:28.418146    3215 kubeadm.go:310] 
	I0916 03:33:28.418174    3215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 03:33:28.418232    3215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 03:33:28.418284    3215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 03:33:28.418293    3215 kubeadm.go:310] 
	I0916 03:33:28.418344    3215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 03:33:28.418355    3215 kubeadm.go:310] 
	I0916 03:33:28.418389    3215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 03:33:28.418394    3215 kubeadm.go:310] 
	I0916 03:33:28.418430    3215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 03:33:28.418489    3215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 03:33:28.418537    3215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 03:33:28.418543    3215 kubeadm.go:310] 
	I0916 03:33:28.418609    3215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 03:33:28.418677    3215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 03:33:28.418689    3215 kubeadm.go:310] 
	I0916 03:33:28.418767    3215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y8yedo.jt70eht9od170wkp \
	I0916 03:33:28.418853    3215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b \
	I0916 03:33:28.418872    3215 kubeadm.go:310] 	--control-plane 
	I0916 03:33:28.418878    3215 kubeadm.go:310] 
	I0916 03:33:28.418947    3215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 03:33:28.418955    3215 kubeadm.go:310] 
	I0916 03:33:28.419020    3215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y8yedo.jt70eht9od170wkp \
	I0916 03:33:28.419109    3215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b 
	I0916 03:33:28.420099    3215 kubeadm.go:310] W0916 10:33:19.929578    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:33:28.420321    3215 kubeadm.go:310] W0916 10:33:19.930124    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 03:33:28.420411    3215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
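	[editor's note] The `--discovery-token-ca-cert-hash sha256:...` value printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo). kubeadm's documentation gives a recipe for recomputing it on the control plane; a sketch against a throwaway CA (a real cluster would point at its CA cert, e.g. `/var/lib/minikube/certs/ca.crt` here):

```shell
#!/bin/sh
set -e

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Throwaway CA standing in for the cluster CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null

# kubeadm's published recipe: extract the public key, re-encode it as DER,
# and take its SHA-256 digest.
ca_hash=$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')

# This is the value that follows "sha256:" in the kubeadm join command.
echo "sha256:$ca_hash"
```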
	I0916 03:33:28.420423    3215 cni.go:84] Creating CNI manager for ""
	I0916 03:33:28.420428    3215 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:33:28.442432    3215 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 03:33:28.515420    3215 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 03:33:28.520899    3215 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 03:33:28.520911    3215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 03:33:28.534957    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 03:33:28.738508    3215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:33:28.738581    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-632000 minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-632000 minikube.k8s.io/primary=true
	I0916 03:33:28.738584    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:28.750245    3215 ops.go:34] apiserver oom_adj: -16
	I0916 03:33:28.862634    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:29.363585    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:29.862883    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:30.363463    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:30.862789    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:31.363423    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:31.863338    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:32.363928    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:32.863293    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:33.364118    3215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 03:33:33.429704    3215 kubeadm.go:1113] duration metric: took 4.691265697s to wait for elevateKubeSystemPrivileges
	I0916 03:33:33.429728    3215 kubeadm.go:394] duration metric: took 13.834637415s to StartCluster
	I0916 03:33:33.429742    3215 settings.go:142] acquiring lock: {Name:mk3e31582aa73ce4c6a48b30dd26240f48d1f910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:33.429834    3215 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.430304    3215 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:33:33.430579    3215 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:33:33.430618    3215 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 03:33:33.430670    3215 addons.go:69] Setting storage-provisioner=true in profile "ha-632000"
	I0916 03:33:33.430685    3215 addons.go:234] Setting addon storage-provisioner=true in "ha-632000"
	I0916 03:33:33.430703    3215 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:33:33.430697    3215 addons.go:69] Setting default-storageclass=true in profile "ha-632000"
	I0916 03:33:33.430711    3215 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:33:33.430716    3215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-632000"
	I0916 03:33:33.430953    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.430973    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.430991    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.431016    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.439627    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50915
	I0916 03:33:33.439795    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50917
	I0916 03:33:33.439974    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.440135    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.440315    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.440324    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.440429    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.440437    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.440555    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.440626    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.440745    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.440842    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.440906    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.440925    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.440947    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.442880    3215 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.443112    3215 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5442720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:33:33.443500    3215 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 03:33:33.443659    3215 addons.go:234] Setting addon default-storageclass=true in "ha-632000"
	I0916 03:33:33.443680    3215 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:33:33.443902    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.443934    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.449558    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50919
	I0916 03:33:33.449892    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.450216    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.450227    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.450460    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.450582    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.450662    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.450730    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.451674    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:33.451984    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50921
	I0916 03:33:33.453648    3215 out.go:177] * Verifying Kubernetes components...
	I0916 03:33:33.454306    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.476174    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.476201    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.476655    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.477354    3215 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:33:33.477389    3215 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:33:33.486360    3215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50923
	I0916 03:33:33.486688    3215 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:33:33.487027    3215 main.go:141] libmachine: Using API Version  1
	I0916 03:33:33.487052    3215 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:33:33.487289    3215 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:33:33.487415    3215 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:33:33.487496    3215 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:33:33.487566    3215 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:33:33.488487    3215 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:33:33.488617    3215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:33:33.488624    3215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:33:33.488638    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:33.488727    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:33.488819    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:33.488900    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:33.488984    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:33.495582    3215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:33:33.532585    3215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:33:33.536171    3215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:33:33.568840    3215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:33:33.568856    3215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:33:33.568874    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:33:33.569037    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:33:33.569132    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:33:33.569233    3215 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:33:33.569318    3215 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:33:33.677716    3215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:33:33.760690    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.760708    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.760911    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.760920    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.760922    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.760928    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.760933    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.760969    3215 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:33:33.761070    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.761081    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.761086    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.761141    3215 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 03:33:33.761157    3215 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 03:33:33.761231    3215 round_trippers.go:463] GET https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 03:33:33.761240    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.761248    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.761252    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.761223    3215 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x5442720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:33:33.761463    3215 node_ready.go:35] waiting up to 6m0s for node "ha-632000" to be "Ready" ...
	I0916 03:33:33.761537    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:33.761544    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.761552    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.761556    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.770075    3215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:33:33.770800    3215 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 03:33:33.771460    3215 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 03:33:33.771842    3215 round_trippers.go:463] PUT https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 03:33:33.771850    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:33.771856    3215 round_trippers.go:473]     Content-Type: application/json
	I0916 03:33:33.771860    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:33.771862    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:33.773823    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:33.773917    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.773926    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.774061    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.774105    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.774114    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.986780    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.986793    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.986952    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.986961    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:33.986989    3215 main.go:141] libmachine: Making call to close driver server
	I0916 03:33:33.986995    3215 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:33:33.987002    3215 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:33:33.987133    3215 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:33:33.987142    3215 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:33:34.026048    3215 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 03:33:34.084170    3215 addons.go:510] duration metric: took 653.569526ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 03:33:34.262134    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:34.262146    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:34.262152    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:34.262155    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:34.263878    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:34.763483    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:34.763507    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:34.763520    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:34.763528    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:34.765986    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.261684    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:35.261700    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:35.261709    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:35.261713    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:35.264051    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.761645    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:35.761683    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:35.761695    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:35.761704    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:35.763760    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:35.764181    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:36.262850    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:36.262862    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:36.262869    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:36.262872    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:36.264350    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:36.762036    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:36.762050    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:36.762056    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:36.762059    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:36.763513    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.261989    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:37.262001    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:37.262007    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:37.262011    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:37.263638    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.762293    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:37.762305    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:37.762312    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:37.762316    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:37.764045    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:37.764533    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:38.262246    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:38.262263    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:38.262271    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:38.262275    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:38.264310    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:38.762849    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:38.762871    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:38.762882    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:38.762888    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:38.765298    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:39.262381    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:39.262399    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:39.262407    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:39.262412    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:39.264385    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:39.763479    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:39.763501    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:39.763513    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:39.763521    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:39.767242    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:39.767764    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:40.262657    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:40.262677    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:40.262689    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:40.262696    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:40.264759    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:40.763577    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:40.763600    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:40.763611    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:40.763616    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:40.766324    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:41.263184    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:41.263211    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:41.263222    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:41.263228    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:41.265807    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:41.762540    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:41.762565    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:41.762576    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:41.762582    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:41.765323    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:42.261698    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:42.261721    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:42.261747    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:42.261751    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:42.263901    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:42.264240    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:42.763304    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:42.763320    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:42.763340    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:42.763346    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:42.764940    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:43.261975    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:43.261995    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:43.262012    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:43.262017    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:43.264597    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:43.761694    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:43.761720    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:43.761731    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:43.761737    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:43.764275    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:44.262603    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:44.262627    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:44.262638    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:44.262646    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:44.265393    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:44.265969    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:44.763032    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:44.763049    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:44.763057    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:44.763063    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:44.765005    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:45.263561    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:45.263618    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:45.263633    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:45.263642    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:45.266255    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:45.761651    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:45.761677    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:45.761689    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:45.761694    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:45.764133    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.261907    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:46.261932    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:46.261943    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:46.261976    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:46.264321    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.761893    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:46.761915    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:46.761926    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:46.761933    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:46.764551    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:46.765163    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:47.262696    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:47.262753    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:47.262769    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:47.262774    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:47.265457    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:47.761981    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:47.761997    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:47.762003    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:47.762011    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:47.763360    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:48.263538    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:48.263560    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:48.263572    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:48.263579    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:48.266157    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:48.761473    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:48.761492    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:48.761539    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:48.761551    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:48.763706    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:49.261567    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:49.261622    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:49.261636    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:49.261656    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:49.264556    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:49.265001    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:49.761753    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:49.761778    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:49.761791    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:49.761798    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:49.764494    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:50.261540    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:50.261562    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:50.261574    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:50.261581    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:50.263837    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:50.762142    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:50.762165    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:50.762177    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:50.762182    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:50.765624    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:51.262576    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:51.262598    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:51.262610    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:51.262615    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:51.265022    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:51.265430    3215 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:33:51.761696    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:51.761721    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:51.761732    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:51.761737    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:51.764270    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:52.262594    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:52.262620    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:52.262640    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:52.262647    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:52.265282    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:52.761489    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:52.761511    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:52.761522    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:52.761529    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:52.764663    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.262096    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.262117    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.262128    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.262135    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.264544    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:53.265052    3215 node_ready.go:49] node "ha-632000" has status "Ready":"True"
	I0916 03:33:53.265068    3215 node_ready.go:38] duration metric: took 19.503901344s for node "ha-632000" to be "Ready" ...
	I0916 03:33:53.265078    3215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:33:53.265127    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:53.265136    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.265143    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.265149    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.267123    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:53.270799    3215 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:53.270865    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:53.270870    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.270876    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.270880    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.274733    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.278057    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.278070    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.278085    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.278088    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.283035    3215 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 03:33:53.771015    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:53.771028    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.771034    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.771037    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.774699    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:53.775096    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:53.775104    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:53.775114    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:53.775117    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:53.779172    3215 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 03:33:54.272320    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:33:54.272341    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.272352    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.272360    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.278703    3215 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 03:33:54.279202    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.279210    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.279215    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.279218    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.280322    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.280625    3215 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.280634    3215 pod_ready.go:82] duration metric: took 1.009838309s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.280640    3215 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.280678    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:33:54.280683    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.280688    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.280693    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.281730    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.282128    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.282138    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.282144    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.282147    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.283306    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.283662    3215 pod_ready.go:93] pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.283671    3215 pod_ready.go:82] duration metric: took 3.025186ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.283677    3215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.283707    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:33:54.283712    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.283717    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.283721    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.284794    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.285105    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.285112    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.285118    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.285121    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.286066    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:54.286344    3215 pod_ready.go:93] pod "etcd-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.286352    3215 pod_ready.go:82] duration metric: took 2.670668ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.286360    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.286389    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:33:54.286395    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.286401    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.286410    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.287350    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:54.287704    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.287712    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.287717    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.287721    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.290567    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.291091    3215 pod_ready.go:93] pod "kube-apiserver-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.291099    3215 pod_ready.go:82] duration metric: took 4.734285ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.291107    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.291144    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:33:54.291149    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.291154    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.291159    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.292596    3215 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:33:54.462366    3215 request.go:632] Waited for 169.108796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.462403    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.462409    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.462418    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.462424    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.464555    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.464828    3215 pod_ready.go:93] pod "kube-controller-manager-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.464837    3215 pod_ready.go:82] duration metric: took 173.727944ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.464857    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.664146    3215 request.go:632] Waited for 199.19688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:33:54.664283    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:33:54.664294    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.664305    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.664313    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.667027    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.863026    3215 request.go:632] Waited for 195.502853ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.863072    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:54.863080    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:54.863111    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:54.863131    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:54.865708    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:54.866269    3215 pod_ready.go:93] pod "kube-proxy-g95xd" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:54.866281    3215 pod_ready.go:82] duration metric: took 401.424869ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:54.866289    3215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:55.062545    3215 request.go:632] Waited for 196.212796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:33:55.062598    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:33:55.062648    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.062662    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.062671    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.065409    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.264190    3215 request.go:632] Waited for 198.096577ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:55.264338    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:33:55.264348    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.264359    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.264365    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.267110    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.267603    3215 pod_ready.go:93] pod "kube-scheduler-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:33:55.267614    3215 pod_ready.go:82] duration metric: took 401.325423ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:33:55.267623    3215 pod_ready.go:39] duration metric: took 2.002567067s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:33:55.267637    3215 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:33:55.267712    3215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:33:55.281508    3215 api_server.go:72] duration metric: took 21.851262532s to wait for apiserver process to appear ...
	I0916 03:33:55.281520    3215 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:33:55.281536    3215 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:33:55.284999    3215 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:33:55.285036    3215 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:33:55.285041    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.285048    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.285053    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.285742    3215 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:33:55.285839    3215 api_server.go:141] control plane version: v1.31.1
	I0916 03:33:55.285849    3215 api_server.go:131] duration metric: took 4.325313ms to wait for apiserver health ...
	I0916 03:33:55.285854    3215 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:33:55.463701    3215 request.go:632] Waited for 177.750875ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.463788    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.463797    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.463808    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.463818    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.466896    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:55.469533    3215 system_pods.go:59] 9 kube-system pods found
	I0916 03:33:55.469549    3215 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:33:55.469554    3215 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:33:55.469557    3215 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:33:55.469561    3215 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:33:55.469564    3215 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:33:55.469571    3215 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:33:55.469575    3215 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:33:55.469579    3215 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:33:55.469582    3215 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:33:55.469587    3215 system_pods.go:74] duration metric: took 183.731771ms to wait for pod list to return data ...
	I0916 03:33:55.469594    3215 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:33:55.663051    3215 request.go:632] Waited for 193.382125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:33:55.663136    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:33:55.663146    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.663157    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.663163    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.665451    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.665623    3215 default_sa.go:45] found service account: "default"
	I0916 03:33:55.665632    3215 default_sa.go:55] duration metric: took 196.037439ms for default service account to be created ...
	I0916 03:33:55.665640    3215 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:33:55.862921    3215 request.go:632] Waited for 197.220841ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.862981    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:33:55.862988    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:55.862996    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:55.863002    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:55.865581    3215 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:33:55.868473    3215 system_pods.go:86] 9 kube-system pods found
	I0916 03:33:55.868489    3215 system_pods.go:89] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:33:55.868493    3215 system_pods.go:89] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:33:55.868497    3215 system_pods.go:89] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:33:55.868500    3215 system_pods.go:89] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:33:55.868503    3215 system_pods.go:89] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:33:55.868507    3215 system_pods.go:89] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:33:55.868510    3215 system_pods.go:89] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:33:55.868513    3215 system_pods.go:89] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:33:55.868516    3215 system_pods.go:89] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:33:55.868520    3215 system_pods.go:126] duration metric: took 202.879593ms to wait for k8s-apps to be running ...
	I0916 03:33:55.868528    3215 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:33:55.868586    3215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:33:55.879099    3215 system_svc.go:56] duration metric: took 10.567187ms WaitForService to wait for kubelet
	I0916 03:33:55.879113    3215 kubeadm.go:582] duration metric: took 22.44887903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:33:55.879124    3215 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:33:56.063126    3215 request.go:632] Waited for 183.899083ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0916 03:33:56.063217    3215 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:33:56.063229    3215 round_trippers.go:469] Request Headers:
	I0916 03:33:56.063241    3215 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:33:56.063252    3215 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:33:56.066353    3215 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:33:56.066655    3215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:33:56.066668    3215 node_conditions.go:123] node cpu capacity is 2
	I0916 03:33:56.066677    3215 node_conditions.go:105] duration metric: took 187.551212ms to run NodePressure ...
	I0916 03:33:56.066686    3215 start.go:241] waiting for startup goroutines ...
	I0916 03:33:56.066691    3215 start.go:246] waiting for cluster config update ...
	I0916 03:33:56.066699    3215 start.go:255] writing updated cluster config ...
	I0916 03:33:56.067035    3215 ssh_runner.go:195] Run: rm -f paused
	I0916 03:33:56.108029    3215 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:33:56.129524    3215 out.go:201] 
	W0916 03:33:56.152241    3215 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:33:56.173991    3215 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:33:56.216451    3215 out.go:177] * Done! kubectl is now configured to use "ha-632000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.603901952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.605276678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612426899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612491453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612504097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.612583075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614297908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614331056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614338873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.614390000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea367824f74364da419f871819f256ac691303099b06d3a3f7f5572625ffbd40/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24a42374ff36346ea697e812cc1c7677f33e83a3205f01392b17d167e6e4970a/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 cri-dockerd[1168]: time="2024-09-16T10:33:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/943f7896df505e59c0950d925676b2731e201c7d27502d47f29d2798ebd886ab/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847458491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847572022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847587186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.847723496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857014491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857158139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857644130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.857888245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910561263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910626962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910639704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:33:53 ha-632000 dockerd[1274]: time="2024-09-16T10:33:53.910733189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9cdebd7b24ee9       c69fa2e9cbf5f                                                                              7 seconds ago       Running             coredns                   0                   943f7896df505       coredns-7c65d6cfc9-jkv8h
	1a9b8d16193d8       c69fa2e9cbf5f                                                                              7 seconds ago       Running             coredns                   0                   ea367824f7436       coredns-7c65d6cfc9-kmtsq
	b3d5179d61251       6e38f40d628db                                                                              7 seconds ago       Running             storage-provisioner       0                   24a42374ff363       storage-provisioner
	da46c43d13fe8       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166   23 seconds ago      Running             kindnet-cni               0                   b16c72338b6ac       kindnet-gs2gm
	fae437a3e2407       60c005f310ff3                                                                              26 seconds ago      Running             kube-proxy                0                   7d1ed68d9ed61       kube-proxy-g95xd
	6dd1718ae6cb7       2e96e5913fc06                                                                              37 seconds ago      Running             etcd                      0                   61ff2b4664268       etcd-ha-632000
	69e9ce19f1363       9aa1fad941575                                                                              37 seconds ago      Running             kube-scheduler            0                   cf944230e79eb       kube-scheduler-ha-632000
	95857f7ef6fa1       6bab7719df100                                                                              37 seconds ago      Running             kube-apiserver            0                   3a54b2c915f89       kube-apiserver-ha-632000
	24a44bacb8b1e       175ffd71cce3d                                                                              37 seconds ago      Running             kube-controller-manager   0                   04ad9902d4d19       kube-controller-manager-ha-632000
	
	
	==> coredns [1a9b8d16193d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9cdebd7b24ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               ha-632000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-632000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-632000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-632000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:33:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:33:58 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:33:58 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:33:58 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:33:58 +0000   Mon, 16 Sep 2024 10:33:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-632000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb973afb192b4b438c93e65f3c567fea
	  System UUID:                151047b3-0000-0000-af67-8767f9a495bd
	  Boot ID:                    ee848727-c854-498b-9230-f8d196d3a6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jkv8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28s
	  kube-system                 coredns-7c65d6cfc9-kmtsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28s
	  kube-system                 etcd-ha-632000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kindnet-gs2gm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-apiserver-ha-632000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-ha-632000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-g95xd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-ha-632000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	  Normal  NodeReady                8s    kubelet          Node ha-632000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006805] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.892462] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.228476] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.385529] systemd-fstab-generator[466]: Ignoring "noauto" option for root device
	[  +0.109893] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
	[  +1.756837] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.253717] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.098359] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.123426] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +2.158628] kauditd_printk_skb: 167 callbacks suppressed
	[  +0.263225] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.109664] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.105364] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.122543] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.500942] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +2.205265] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.330462] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +3.547813] systemd-fstab-generator[1639]: Ignoring "noauto" option for root device
	[  +1.149789] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.893434] systemd-fstab-generator[2050]: Ignoring "noauto" option for root device
	[  +6.116227] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.079212] kauditd_printk_skb: 32 callbacks suppressed
	[ +19.823672] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [6dd1718ae6cb] <==
	{"level":"info","ts":"2024-09-16T10:33:24.047730Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.052160Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.052175Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:33:24.047758Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:33:24.052213Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:33:24.707415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707500Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 1"}
	{"level":"info","ts":"2024-09-16T10:33:24.707557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.712635Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:24.712748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.714615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.715358Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.717393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.717649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.717680Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.718273Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.719184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:33:24.719319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.723827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:34:01 up 1 min,  0 users,  load average: 0.31, 0.10, 0.03
	Linux ha-632000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [da46c43d13fe] <==
	I0916 10:33:38.105068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:38.105822       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:33:38.105896       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:38.105923       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:38.105937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:38.701206       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:33:48.705636       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:48.705736       1 main.go:299] handling current node
	I0916 10:33:58.706377       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:58.706440       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95857f7ef6fa] <==
	I0916 10:33:25.880296       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:33:25.880403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:33:25.880451       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:33:25.884846       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:33:25.894553       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:33:25.894636       1 policy_source.go:224] refreshing policies
	E0916 10:33:25.907371       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0916 10:33:25.931125       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 10:33:25.978348       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:33:26.110430       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:33:26.783260       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:33:26.787077       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:33:26.787170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:33:27.105540       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:33:27.130234       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:33:27.191665       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:33:27.202457       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I0916 10:33:27.203307       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:33:27.211217       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:33:27.835196       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:33:27.994584       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:33:28.000965       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:33:28.007254       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:33:32.589191       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:33:33.389076       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [24a44bacb8b1] <==
	I0916 10:33:32.792285       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.813766       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.836606       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:33:32.836825       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:33:33.208030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283196       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283305       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:33.489386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:33.785544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="393.158805ms"
	I0916 10:33:33.795508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.792999ms"
	I0916 10:33:33.796330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.465µs"
	I0916 10:33:33.796593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.299µs"
	I0916 10:33:53.221882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.228026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.232440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.065µs"
	I0916 10:33:53.240523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.002µs"
	I0916 10:33:53.245409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.983µs"
	I0916 10:33:53.253146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.846µs"
	I0916 10:33:54.145943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.356µs"
	I0916 10:33:54.168878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.87555ms"
	I0916 10:33:54.169182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.016µs"
	I0916 10:33:54.179810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.838071ms"
	I0916 10:33:54.180086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.84µs"
	I0916 10:33:57.576580       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:33:58.801252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	
	
	==> kube-proxy [fae437a3e240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:33:34.691611       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:33:34.697203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:33:34.697256       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:34.722746       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:33:34.723060       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:33:34.723106       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:34.725578       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:34.725970       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:34.725998       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:34.726895       1 config.go:199] "Starting service config controller"
	I0916 10:33:34.726932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:34.726949       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:34.726952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:34.727508       1 config.go:328] "Starting node config controller"
	I0916 10:33:34.727534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:34.827447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:34.827457       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:34.827564       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69e9ce19f136] <==
	W0916 10:33:25.862831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:33:25.865183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.862775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:25.865397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:25.865433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:25.865448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.708747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:26.708911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.709690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:26.709836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.726430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:26.726462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.767336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:26.767420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.769370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:26.769455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.794289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:26.794346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:27.460783       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.697919    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98c1fdde-9eb3-4546-9d59-b18e4367907d-lib-modules\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698239    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs95q\" (UniqueName: \"kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698532    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrhc\" (UniqueName: \"kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698741    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9931709a-b521-4642-8c91-c0bb52b9f284-cni-cfg\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.698943    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-proxy\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.699188    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98c1fdde-9eb3-4546-9d59-b18e4367907d-xtables-lock\") pod \"kube-proxy-g95xd\" (UID: \"98c1fdde-9eb3-4546-9d59-b18e4367907d\") " pod="kube-system/kube-proxy-g95xd"
	Sep 16 10:33:32 ha-632000 kubelet[2057]: I0916 10:33:32.699409    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9931709a-b521-4642-8c91-c0bb52b9f284-xtables-lock\") pod \"kindnet-gs2gm\" (UID: \"9931709a-b521-4642-8c91-c0bb52b9f284\") " pod="kube-system/kindnet-gs2gm"
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807551    2057 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807574    2057 projected.go:194] Error preparing data for projected volume kube-api-access-jgrhc for pod kube-system/kindnet-gs2gm: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807616    2057 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc podName:9931709a-b521-4642-8c91-c0bb52b9f284 nodeName:}" failed. No retries permitted until 2024-09-16 10:33:34.307603452 +0000 UTC m=+6.545421988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jgrhc" (UniqueName: "kubernetes.io/projected/9931709a-b521-4642-8c91-c0bb52b9f284-kube-api-access-jgrhc") pod "kindnet-gs2gm" (UID: "9931709a-b521-4642-8c91-c0bb52b9f284") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807751    2057 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807760    2057 projected.go:194] Error preparing data for projected volume kube-api-access-zs95q for pod kube-system/kube-proxy-g95xd: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:33 ha-632000 kubelet[2057]: E0916 10:33:33.807778    2057 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q podName:98c1fdde-9eb3-4546-9d59-b18e4367907d nodeName:}" failed. No retries permitted until 2024-09-16 10:33:34.307772689 +0000 UTC m=+6.545591224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zs95q" (UniqueName: "kubernetes.io/projected/98c1fdde-9eb3-4546-9d59-b18e4367907d-kube-api-access-zs95q") pod "kube-proxy-g95xd" (UID: "98c1fdde-9eb3-4546-9d59-b18e4367907d") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:33:34 ha-632000 kubelet[2057]: I0916 10:33:34.312060    2057 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:33:35 ha-632000 kubelet[2057]: I0916 10:33:35.003443    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g95xd" podStartSLOduration=3.003431476 podStartE2EDuration="3.003431476s" podCreationTimestamp="2024-09-16 10:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:35.003334041 +0000 UTC m=+7.241152584" watchObservedRunningTime="2024-09-16 10:33:35.003431476 +0000 UTC m=+7.241250019"
	Sep 16 10:33:39 ha-632000 kubelet[2057]: I0916 10:33:39.041882    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gs2gm" podStartSLOduration=3.8103986340000002 podStartE2EDuration="7.041866411s" podCreationTimestamp="2024-09-16 10:33:32 +0000 UTC" firstStartedPulling="2024-09-16 10:33:34.639398593 +0000 UTC m=+6.877217125" lastFinishedPulling="2024-09-16 10:33:37.870866368 +0000 UTC m=+10.108684902" observedRunningTime="2024-09-16 10:33:39.041153884 +0000 UTC m=+11.278972427" watchObservedRunningTime="2024-09-16 10:33:39.041866411 +0000 UTC m=+11.279684948"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.214934    2057 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357436    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75558cac-fdbb-4a54-94c5-31f3f3d980c3-tmp\") pod \"storage-provisioner\" (UID: \"75558cac-fdbb-4a54-94c5-31f3f3d980c3\") " pod="kube-system/storage-provisioner"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357559    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume\") pod \"coredns-7c65d6cfc9-kmtsq\" (UID: \"535c3a5b-013f-4bdc-94c4-b109c6f43b0b\") " pod="kube-system/coredns-7c65d6cfc9-kmtsq"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357633    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkzf\" (UniqueName: \"kubernetes.io/projected/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-kube-api-access-phkzf\") pod \"coredns-7c65d6cfc9-kmtsq\" (UID: \"535c3a5b-013f-4bdc-94c4-b109c6f43b0b\") " pod="kube-system/coredns-7c65d6cfc9-kmtsq"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357666    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume\") pod \"coredns-7c65d6cfc9-jkv8h\" (UID: \"b174db06-0ade-4594-8b4b-6058301491f8\") " pod="kube-system/coredns-7c65d6cfc9-jkv8h"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357687    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jb5n\" (UniqueName: \"kubernetes.io/projected/b174db06-0ade-4594-8b4b-6058301491f8-kube-api-access-9jb5n\") pod \"coredns-7c65d6cfc9-jkv8h\" (UID: \"b174db06-0ade-4594-8b4b-6058301491f8\") " pod="kube-system/coredns-7c65d6cfc9-jkv8h"
	Sep 16 10:33:53 ha-632000 kubelet[2057]: I0916 10:33:53.357709    2057 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbqn\" (UniqueName: \"kubernetes.io/projected/75558cac-fdbb-4a54-94c5-31f3f3d980c3-kube-api-access-pvbqn\") pod \"storage-provisioner\" (UID: \"75558cac-fdbb-4a54-94c5-31f3f3d980c3\") " pod="kube-system/storage-provisioner"
	Sep 16 10:33:54 ha-632000 kubelet[2057]: I0916 10:33:54.160076    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podStartSLOduration=21.160061336 podStartE2EDuration="21.160061336s" podCreationTimestamp="2024-09-16 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:54.148085555 +0000 UTC m=+26.385904097" watchObservedRunningTime="2024-09-16 10:33:54.160061336 +0000 UTC m=+26.397879874"
	Sep 16 10:33:54 ha-632000 kubelet[2057]: I0916 10:33:54.192691    2057 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podStartSLOduration=21.192643097 podStartE2EDuration="21.192643097s" podCreationTimestamp="2024-09-16 10:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:33:54.174926711 +0000 UTC m=+26.412745253" watchObservedRunningTime="2024-09-16 10:33:54.192643097 +0000 UTC m=+26.430461634"
	
	
	==> storage-provisioner [b3d5179d6125] <==
	I0916 10:33:53.925247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:33:53.932000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:33:53.932238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:33:53.952610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:33:53.952935       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763!
	I0916 10:33:53.954583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"faa6a23c-3add-429a-8ab0-8b9c7c81cc28", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763 became leader
	I0916 10:33:54.054153       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-632000_27daa7da-1dfc-47e3-9900-caef0657c763!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-632000 -n ha-632000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-632000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.88s)

TestMultiControlPlane/serial/StopCluster (8.53s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-632000 stop -v=7 --alsologtostderr: (8.392918173s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr: exit status 7 (69.105536ms)

-- stdout --
	ha-632000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0916 03:34:10.833441    3298 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:34:10.833706    3298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.833711    3298 out.go:358] Setting ErrFile to fd 2...
	I0916 03:34:10.833715    3298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.833910    3298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:34:10.834084    3298 out.go:352] Setting JSON to false
	I0916 03:34:10.834107    3298 mustload.go:65] Loading cluster: ha-632000
	I0916 03:34:10.834145    3298 notify.go:220] Checking for updates...
	I0916 03:34:10.834423    3298 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:10.834437    3298 status.go:255] checking status of ha-632000 ...
	I0916 03:34:10.834869    3298 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:10.834899    3298 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:10.843580    3298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51006
	I0916 03:34:10.843903    3298 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:10.844338    3298 main.go:141] libmachine: Using API Version  1
	I0916 03:34:10.844352    3298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:10.844565    3298 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:10.844721    3298 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:10.844820    3298 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:10.844876    3298 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:34:10.845787    3298 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 3227 missing from process table
	I0916 03:34:10.845832    3298 status.go:330] ha-632000 host status = "Stopped" (err=<nil>)
	I0916 03:34:10.845839    3298 status.go:343] host is not running, skipping remaining checks
	I0916 03:34:10.845844    3298 status.go:257] ha-632000 status: &{Name:ha-632000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 7 (68.085602ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (8.53s)

TestMultiControlPlane/serial/RestartCluster (63.88s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-632000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0916 03:34:20.311241    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-632000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (1m0.286625358s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:574: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:577: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:580: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-632000 status -v=7 --alsologtostderr": ha-632000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-632000 logs -n 25: (2.886057918s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  -- nslookup    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-632000 -v=7                | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node stop m02 -v=7         | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node start m02 -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000 -v=7               | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-632000 -v=7                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:33 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	| node    | ha-632000 node delete m03 -v=7       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-632000 stop -v=7                  | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:34 PDT | 16 Sep 24 03:34 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:34 PDT | 16 Sep 24 03:35 PDT |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:34:10
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:34:10.968911    3304 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:34:10.969089    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.969094    3304 out.go:358] Setting ErrFile to fd 2...
	I0916 03:34:10.969098    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.969259    3304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:34:10.970749    3304 out.go:352] Setting JSON to false
	I0916 03:34:10.992630    3304 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2021,"bootTime":1726480829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:34:10.992719    3304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:34:11.014288    3304 out.go:177] * [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:34:11.056022    3304 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:34:11.056088    3304 notify.go:220] Checking for updates...
	I0916 03:34:11.100674    3304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:11.121937    3304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:34:11.142924    3304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:34:11.163772    3304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:34:11.184946    3304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:34:11.206305    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:11.206798    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.206859    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.215855    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51012
	I0916 03:34:11.216221    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.216687    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.216709    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.216968    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.217090    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.217287    3304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:34:11.217550    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.217574    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.225810    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51014
	I0916 03:34:11.226145    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.226497    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.226514    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.226715    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.226820    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.255869    3304 out.go:177] * Using the hyperkit driver based on existing profile
	I0916 03:34:11.297599    3304 start.go:297] selected driver: hyperkit
	I0916 03:34:11.297620    3304 start.go:901] validating driver "hyperkit" against &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:11.297787    3304 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:34:11.297921    3304 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:34:11.298084    3304 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:34:11.306846    3304 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:34:11.310566    3304 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.310584    3304 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:34:11.313149    3304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:34:11.313185    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:11.313229    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:11.313303    3304 start.go:340] cluster config:
	{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:11.313391    3304 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:34:11.355837    3304 out.go:177] * Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	I0916 03:34:11.376857    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:34:11.376928    3304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:34:11.376948    3304 cache.go:56] Caching tarball of preloaded images
	I0916 03:34:11.377137    3304 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:34:11.377156    3304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:34:11.377298    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:34:11.378217    3304 start.go:360] acquireMachinesLock for ha-632000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:34:11.378346    3304 start.go:364] duration metric: took 103.654µs to acquireMachinesLock for "ha-632000"
	I0916 03:34:11.378382    3304 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:34:11.378401    3304 fix.go:54] fixHost starting: 
	I0916 03:34:11.378854    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.378911    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.388034    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51016
	I0916 03:34:11.388401    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.388702    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.388712    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.388935    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.389048    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.389151    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:11.389228    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.389304    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:34:11.390214    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 3227 missing from process table
	I0916 03:34:11.390244    3304 fix.go:112] recreateIfNeeded on ha-632000: state=Stopped err=<nil>
	I0916 03:34:11.390260    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	W0916 03:34:11.390342    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:34:11.411956    3304 out.go:177] * Restarting existing hyperkit VM for "ha-632000" ...
	I0916 03:34:11.435771    3304 main.go:141] libmachine: (ha-632000) Calling .Start
	I0916 03:34:11.436068    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.436148    3304 main.go:141] libmachine: (ha-632000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid
	I0916 03:34:11.437908    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 3227 missing from process table
	I0916 03:34:11.437923    3304 main.go:141] libmachine: (ha-632000) DBG | pid 3227 is in state "Stopped"
	I0916 03:34:11.437944    3304 main.go:141] libmachine: (ha-632000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid...
	I0916 03:34:11.438138    3304 main.go:141] libmachine: (ha-632000) DBG | Using UUID 1510d94a-ca8f-47b3-af67-8767f9a495bd
	I0916 03:34:11.547334    3304 main.go:141] libmachine: (ha-632000) DBG | Generated MAC 1a:88:5b:ab:d5:de
	I0916 03:34:11.547372    3304 main.go:141] libmachine: (ha-632000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:34:11.547436    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:34:11.547476    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:34:11.547505    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1510d94a-ca8f-47b3-af67-8767f9a495bd", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:34:11.547559    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1510d94a-ca8f-47b3-af67-8767f9a495bd -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:34:11.547576    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:34:11.549082    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Pid is 3317
	I0916 03:34:11.549513    3304 main.go:141] libmachine: (ha-632000) DBG | Attempt 0
	I0916 03:34:11.549523    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.549597    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:11.551382    3304 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:34:11.551471    3304 main.go:141] libmachine: (ha-632000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:34:11.551482    3304 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95ae1}
	I0916 03:34:11.551490    3304 main.go:141] libmachine: (ha-632000) DBG | Found match: 1a:88:5b:ab:d5:de
	I0916 03:34:11.551498    3304 main.go:141] libmachine: (ha-632000) DBG | IP: 192.169.0.5
	I0916 03:34:11.551594    3304 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:34:11.552255    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:11.552426    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:34:11.552991    3304 machine.go:93] provisionDockerMachine start ...
	I0916 03:34:11.553003    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.553145    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:11.553247    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:11.553374    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:11.553498    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:11.553613    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:11.553740    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:11.553943    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:11.553952    3304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:34:11.557339    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:34:11.610827    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:34:11.611542    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:34:11.611555    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:34:11.611562    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:34:11.611570    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:34:11.993592    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:34:11.993610    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:34:12.108247    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:34:12.108266    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:34:12.108276    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:34:12.108288    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:34:12.109170    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:34:12.109180    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:34:17.697882    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:34:17.697912    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:34:17.697923    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:34:17.721257    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:34:21.554022    3304 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0916 03:34:24.613753    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 03:34:24.613768    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.613936    3304 buildroot.go:166] provisioning hostname "ha-632000"
	I0916 03:34:24.613947    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.614064    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.614156    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.614278    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.614380    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.614465    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.614592    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.614740    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.614748    3304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000 && echo "ha-632000" | sudo tee /etc/hostname
	I0916 03:34:24.681698    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000
	
	I0916 03:34:24.681716    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.681845    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.681945    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.682035    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.682124    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.682269    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.682425    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.682436    3304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:34:24.745829    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
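The shell snippet minikube ran over SSH above is an idempotent /etc/hosts patch: do nothing if the hostname is already mapped, rewrite an existing `127.0.1.1` line if present, otherwise append one. A minimal sketch of the same logic, run against a scratch copy rather than the real /etc/hosts (the file path and seed contents here are illustrative only):

```shell
# Illustrative re-run of the /etc/hosts patch against a scratch file
# (the real provisioner runs it over SSH with sudo against /etc/hosts).
NAME=ha-632000
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# Only touch the file if the hostname is not already present.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    # Rewrite an existing 127.0.1.1 line in place, otherwise append one.
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        sed -i.bak "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

`sed -i.bak` is used so the in-place edit works with both GNU and BSD sed; running the block a second time is a no-op because the first `grep` then succeeds.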
	I0916 03:34:24.745851    3304 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:34:24.745869    3304 buildroot.go:174] setting up certificates
	I0916 03:34:24.745877    3304 provision.go:84] configureAuth start
	I0916 03:34:24.745886    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.746023    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:24.746113    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.746208    3304 provision.go:143] copyHostCerts
	I0916 03:34:24.746243    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:34:24.746316    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:34:24.746324    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:34:24.746479    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:34:24.746699    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:34:24.746738    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:34:24.746743    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:34:24.746868    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:34:24.747030    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:34:24.747068    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:34:24.747073    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:34:24.747148    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:34:24.747332    3304 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000 san=[127.0.0.1 192.169.0.5 ha-632000 localhost minikube]
	I0916 03:34:24.788917    3304 provision.go:177] copyRemoteCerts
	I0916 03:34:24.788977    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:34:24.788991    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.789102    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.789198    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.789301    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.789395    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:24.825705    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:34:24.825778    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:34:24.845370    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:34:24.845432    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 03:34:24.865178    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:34:24.865251    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:34:24.884831    3304 provision.go:87] duration metric: took 138.944021ms to configureAuth
	I0916 03:34:24.884843    3304 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:34:24.884967    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:24.884979    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:24.885124    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.885220    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.885311    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.885386    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.885471    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.885588    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.885725    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.885733    3304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:34:24.943488    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:34:24.943500    3304 buildroot.go:70] root file system type: tmpfs
	I0916 03:34:24.943574    3304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:34:24.943589    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.943717    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.943811    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.943888    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.943974    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.944108    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.944249    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.944292    3304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:34:25.012349    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:34:25.012368    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:25.012501    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:25.012596    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:25.012680    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:25.012785    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:25.012929    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:25.013072    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:25.013083    3304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:34:26.666741    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
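The one-liner above uses a diff-then-move idiom: the candidate unit file is only installed (and docker only restarted) when it differs from what is already on disk. A small sketch of the same pattern against scratch paths instead of /lib/systemd/system (paths and unit contents here are illustrative, not minikube's actual unit):

```shell
# Sketch of the install-if-changed idiom from the log, using scratch paths.
UNIT_DIR=$(mktemp -d)
NEW="$UNIT_DIR/docker.service.new"
CUR="$UNIT_DIR/docker.service"
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$NEW"

# If the installed unit is missing or differs, move the candidate into place;
# the real command then runs daemon-reload, enable, and restart via systemctl.
if ! diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$CUR"
fi
```

In the log, `diff` fails with "No such file or directory" because no unit was installed yet, so the `||` branch fires and the fresh unit is moved into place and enabled.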
	I0916 03:34:26.666755    3304 machine.go:96] duration metric: took 15.113997249s to provisionDockerMachine
	I0916 03:34:26.666768    3304 start.go:293] postStartSetup for "ha-632000" (driver="hyperkit")
	I0916 03:34:26.666776    3304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:34:26.666787    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.666978    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:34:26.666992    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.667087    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.667194    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.667273    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.667350    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.706592    3304 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:34:26.711498    3304 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:34:26.711513    3304 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:34:26.711618    3304 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:34:26.711797    3304 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:34:26.711804    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:34:26.712020    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:34:26.722253    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:34:26.749449    3304 start.go:296] duration metric: took 82.672854ms for postStartSetup
	I0916 03:34:26.749482    3304 fix.go:56] duration metric: took 15.371327135s for fixHost
	I0916 03:34:26.749494    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.749632    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.749729    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.749823    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.749915    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.750058    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:26.750196    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:26.750203    3304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:34:26.808207    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482866.972905014
	
	I0916 03:34:26.808217    3304 fix.go:216] guest clock: 1726482866.972905014
	I0916 03:34:26.808223    3304 fix.go:229] Guest: 2024-09-16 03:34:26.972905014 -0700 PDT Remote: 2024-09-16 03:34:26.749485 -0700 PDT m=+15.815539300 (delta=223.420014ms)
	I0916 03:34:26.808242    3304 fix.go:200] guest clock delta is within tolerance: 223.420014ms
	I0916 03:34:26.808245    3304 start.go:83] releasing machines lock for "ha-632000", held for 15.430137129s
	I0916 03:34:26.808262    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808401    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:26.808513    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808823    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808942    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.809043    3304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:34:26.809076    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.809101    3304 ssh_runner.go:195] Run: cat /version.json
	I0916 03:34:26.809110    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.809188    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.809211    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.809279    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.809305    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.809350    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.809401    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.809429    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.809486    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.841629    3304 ssh_runner.go:195] Run: systemctl --version
	I0916 03:34:26.888749    3304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:34:26.895634    3304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:34:26.895690    3304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:34:26.909307    3304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:34:26.909320    3304 start.go:495] detecting cgroup driver to use...
	I0916 03:34:26.909423    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:34:26.924212    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:34:26.933397    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:34:26.942270    3304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:34:26.942318    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:34:26.951238    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:34:26.960242    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:34:26.969261    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:34:26.978143    3304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:34:26.987134    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:34:26.996067    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:34:27.004962    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
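The series of `sed` edits above rewrites `/etc/containerd/config.toml` in place to pin the pause image and force the cgroupfs driver. The same edits can be reproduced in isolation against a scratch file (a sketch only; the real run targets the VM's config via sudo):

```shell
# Scratch copy standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Same substitutions as the logged commands: keep indentation (\1),
# replace the value. cgroupfs driver means SystemdCgroup = false.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```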
	I0916 03:34:27.013761    3304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:34:27.021787    3304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:34:27.029829    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:27.137695    3304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:34:27.155569    3304 start.go:495] detecting cgroup driver to use...
	I0916 03:34:27.155665    3304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:34:27.175875    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:34:27.186405    3304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:34:27.201501    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:34:27.212338    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:34:27.223624    3304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:34:27.245802    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:34:27.256277    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
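Note the endpoint switch: the earlier write pointed `crictl` at containerd, and this one repoints it at cri-dockerd after the runtime decision. A sketch of the write, targeting a temp directory instead of `/etc/crictl.yaml` and without sudo:

```shell
# Scratch directory standing in for /etc.
etc=$(mktemp -d)

# Same pattern as the logged command: printf the YAML, tee it into place.
printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | tee "$etc/crictl.yaml"
```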
	I0916 03:34:27.271232    3304 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:34:27.274402    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:34:27.281587    3304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:34:27.300136    3304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:34:27.396868    3304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:34:27.504053    3304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:34:27.504125    3304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:34:27.518529    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:27.609526    3304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:34:29.935674    3304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.326163573s)
	I0916 03:34:29.935744    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:34:29.947246    3304 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:34:29.961201    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:34:29.971947    3304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:34:30.067117    3304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:34:30.160710    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.263470    3304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:34:30.277335    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:34:30.288368    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.383376    3304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:34:30.442288    3304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:34:30.442381    3304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:34:30.446794    3304 start.go:563] Will wait 60s for crictl version
	I0916 03:34:30.446848    3304 ssh_runner.go:195] Run: which crictl
	I0916 03:34:30.449770    3304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:34:30.480712    3304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:34:30.480801    3304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:34:30.506592    3304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:34:30.566841    3304 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:34:30.566887    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:30.567295    3304 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0916 03:34:30.572033    3304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
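The hosts-file one-liner above is idempotent: it strips any existing line ending in `host.minikube.internal` and appends a fresh mapping, so repeated starts never accumulate duplicates. Demonstrated on a scratch hosts file (the real run edits `/etc/hosts` via sudo):

```shell
# Scratch hosts file with a stale host.minikube.internal entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n' > "$hosts"

# Drop any line ending in "<tab>host.minikube.internal", append the new one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

cat "$hosts"   # exactly one host.minikube.internal line, now at 192.169.0.1
```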
	I0916 03:34:30.581666    3304 kubeadm.go:883] updating cluster {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:34:30.581730    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:34:30.581809    3304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:34:30.595489    3304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:34:30.595503    3304 docker.go:615] Images already preloaded, skipping extraction
	I0916 03:34:30.595591    3304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:34:30.607238    3304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3

	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:34:30.607254    3304 cache_images.go:84] Images are preloaded, skipping loading
	I0916 03:34:30.607262    3304 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0916 03:34:30.607335    3304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:34:30.607418    3304 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:34:30.640314    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:30.640327    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:30.640333    3304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:34:30.640349    3304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-632000 NodeName:ha-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:34:30.640444    3304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-632000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:34:30.640518    3304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:34:30.648388    3304 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:34:30.648440    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:34:30.655689    3304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0916 03:34:30.669172    3304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:34:30.682497    3304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0916 03:34:30.696410    3304 ssh_runner.go:195] Run: grep 192.169.0.5	control-plane.minikube.internal$ /etc/hosts
	I0916 03:34:30.699419    3304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:34:30.708827    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.800708    3304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:34:30.815503    3304 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000 for IP: 192.169.0.5
	I0916 03:34:30.815516    3304 certs.go:194] generating shared ca certs ...
	I0916 03:34:30.815526    3304 certs.go:226] acquiring lock for ca certs: {Name:mkbe6743287669047aae1be1aaf47d24793fafa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:30.815719    3304 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key
	I0916 03:34:30.815802    3304 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key
	I0916 03:34:30.815814    3304 certs.go:256] generating profile certs ...
	I0916 03:34:30.815917    3304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key
	I0916 03:34:30.816003    3304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250
	I0916 03:34:30.816074    3304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key
	I0916 03:34:30.816082    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 03:34:30.816104    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 03:34:30.816121    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 03:34:30.816140    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 03:34:30.816156    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 03:34:30.816185    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 03:34:30.816213    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 03:34:30.816231    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 03:34:30.816330    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem (1338 bytes)
	W0916 03:34:30.816382    3304 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649_empty.pem, impossibly tiny 0 bytes
	I0916 03:34:30.816391    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 03:34:30.816426    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem (1082 bytes)
	I0916 03:34:30.816460    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:34:30.816489    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem (1675 bytes)
	I0916 03:34:30.816555    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:34:30.816589    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem -> /usr/share/ca-certificates/1649.pem
	I0916 03:34:30.816610    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /usr/share/ca-certificates/16492.pem
	I0916 03:34:30.816628    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:30.817136    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:34:30.847112    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:34:30.870611    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:34:30.894802    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:34:30.924616    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0916 03:34:30.952802    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:34:30.972374    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:34:30.992413    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 03:34:31.012243    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem --> /usr/share/ca-certificates/1649.pem (1338 bytes)
	I0916 03:34:31.032033    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /usr/share/ca-certificates/16492.pem (1708 bytes)
	I0916 03:34:31.051855    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:34:31.071449    3304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:34:31.085070    3304 ssh_runner.go:195] Run: openssl version
	I0916 03:34:31.089404    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:34:31.097650    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.101068    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.101109    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.105337    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 03:34:31.113599    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1649.pem && ln -fs /usr/share/ca-certificates/1649.pem /etc/ssl/certs/1649.pem"
	I0916 03:34:31.122322    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.126104    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:24 /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.126171    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.130938    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1649.pem /etc/ssl/certs/51391683.0"
	I0916 03:34:31.139986    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16492.pem && ln -fs /usr/share/ca-certificates/16492.pem /etc/ssl/certs/16492.pem"
	I0916 03:34:31.149139    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.152974    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:24 /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.153041    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.157722    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16492.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 03:34:31.166758    3304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:34:31.170375    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 03:34:31.174736    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 03:34:31.178948    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 03:34:31.183262    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 03:34:31.187460    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 03:34:31.191725    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
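Two cert conventions appear above: the `/etc/ssl/certs/<hash>.0` names (e.g. `b5213941.0`, `3ec20f2e.0`) are OpenSSL subject-hash symlinks that let TLS lookups find a CA by hashed subject, and `openssl x509 -checkend 86400` asks whether a cert expires within the next 24 hours (exit 0 means it does not). A sketch against a throwaway self-signed cert in a temp dir (the hash value depends on the cert's subject, so it will differ from the ones logged):

```shell
# Throwaway CA-style cert; -days 2 so the 24h -checkend below passes.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$certs/key.pem" -out "$certs/demo.pem" -days 2 2>/dev/null

# Subject-hash link, same naming scheme as /etc/ssl/certs/<hash>.0.
hash=$(openssl x509 -hash -noout -in "$certs/demo.pem")
ln -fs "$certs/demo.pem" "$certs/$hash.0"
ls -l "$certs/$hash.0"

# Expiry check: exit status 0 iff still valid 86400s from now.
openssl x509 -noout -in "$certs/demo.pem" -checkend 86400 \
  && echo "valid for at least 24h"
```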
	I0916 03:34:31.195983    3304 kubeadm.go:392] StartCluster: {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:31.196097    3304 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:34:31.208147    3304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:34:31.215656    3304 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 03:34:31.215667    3304 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 03:34:31.215706    3304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 03:34:31.223055    3304 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:34:31.223382    3304 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:31.223473    3304 kubeconfig.go:62] /Users/jenkins/minikube-integration/19651-1087/kubeconfig needs updating (will repair): [kubeconfig missing "ha-632000" cluster setting kubeconfig missing "ha-632000" context setting]
	I0916 03:34:31.223667    3304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:31.224198    3304 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:31.224396    3304 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x30dc720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:34:31.224736    3304 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 03:34:31.224912    3304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 03:34:31.232171    3304 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0916 03:34:31.232190    3304 kubeadm.go:1160] stopping kube-system containers ...
	I0916 03:34:31.232260    3304 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:34:31.247135    3304 docker.go:483] Stopping containers: [9cdebd7b24ee 1a9b8d16193d b3d5179d6125 943f7896df50 24a42374ff36 ea367824f743 da46c43d13fe fae437a3e240 b16c72338b6a 7d1ed68d9ed6 6dd1718ae6cb 69e9ce19f136 95857f7ef6fa 24a44bacb8b1 04ad9902d4d1 cf944230e79e 61ff2b466426 3a54b2c915f8]
	I0916 03:34:31.247235    3304 ssh_runner.go:195] Run: docker stop 9cdebd7b24ee 1a9b8d16193d b3d5179d6125 943f7896df50 24a42374ff36 ea367824f743 da46c43d13fe fae437a3e240 b16c72338b6a 7d1ed68d9ed6 6dd1718ae6cb 69e9ce19f136 95857f7ef6fa 24a44bacb8b1 04ad9902d4d1 cf944230e79e 61ff2b466426 3a54b2c915f8
	I0916 03:34:31.264064    3304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 03:34:31.275855    3304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:34:31.284502    3304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 03:34:31.284510    3304 kubeadm.go:157] found existing configuration files:
	
	I0916 03:34:31.284556    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 03:34:31.291523    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 03:34:31.291565    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 03:34:31.298958    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 03:34:31.306045    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 03:34:31.306093    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 03:34:31.313603    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 03:34:31.320619    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 03:34:31.320661    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:34:31.328130    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 03:34:31.335290    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 03:34:31.335333    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
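The four grep/rm pairs above follow one pattern: each `/etc/kubernetes/*.conf` is checked for the control-plane endpoint, and any file that is missing or lacks it is removed so the subsequent `kubeadm init phase kubeconfig all` regenerates it. A minimal sketch of that decision logic over an in-memory map (hypothetical helper, not minikube's actual code; nothing is deleted):

```go
package main

import (
	"fmt"
	"strings"
)

// staleKubeconfigs returns the kubeconfig files that should be removed
// because they do not contain the expected control-plane endpoint.
// contents maps file name -> file body, with "" meaning the file is absent.
func staleKubeconfigs(contents map[string]string, endpoint string) []string {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	var stale []string
	for _, f := range files {
		if !strings.Contains(contents[f], endpoint) {
			stale = append(stale, f) // corresponds to: sudo rm -f /etc/kubernetes/<f>
		}
	}
	return stale
}

func main() {
	// In the run above none of the four files exist, so all four are removed.
	fmt.Println(staleKubeconfigs(map[string]string{}, "https://control-plane.minikube.internal:8443"))
}
```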
	I0916 03:34:31.342670    3304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:34:31.350069    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:31.414838    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.543650    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.128812527s)
	I0916 03:34:32.543673    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.699189    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.752257    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.803312    3304 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:34:32.803394    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.303990    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.803493    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.816653    3304 api_server.go:72] duration metric: took 1.013365866s to wait for apiserver process to appear ...
	I0916 03:34:33.816667    3304 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:34:33.816684    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.439673    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 03:34:36.439690    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 03:34:36.439698    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.462051    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 03:34:36.462067    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 03:34:36.816809    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.821366    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:34:36.821386    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:34:37.316789    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:37.325604    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:34:37.325621    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:34:37.817112    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:37.821324    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
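The healthz sequence above (403, 403, 500, 500, 200) is expected during a control-plane restart: 403 while anonymous access to `/healthz` is still forbidden (RBAC bootstrap roles not yet installed), 500 while post-start hooks such as `rbac/bootstrap-roles` are still failing, then 200 `ok`. Both failure codes are treated as transient and retried. A sketch of that classification over a recorded probe sequence (not a live HTTP client):

```go
package main

import (
	"fmt"
	"net/http"
)

// firstHealthy returns how many probes it took to see a 200, treating every
// other status (403, 500, ...) as transient and retryable.
func firstHealthy(codes []int) (probes int, ok bool) {
	for i, c := range codes {
		if c == http.StatusOK {
			return i + 1, true
		}
		// non-200: log the body and retry on the next tick
	}
	return len(codes), false
}

func main() {
	probes := []int{403, 403, 500, 500, 200} // the sequence seen in this run
	fmt.Println(firstHealthy(probes)) // 5 true
}
```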
	I0916 03:34:37.821383    3304 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:34:37.821389    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:37.821396    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:37.821406    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:37.826801    3304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 03:34:37.826862    3304 api_server.go:141] control plane version: v1.31.1
	I0916 03:34:37.826872    3304 api_server.go:131] duration metric: took 4.010264636s to wait for apiserver health ...
	I0916 03:34:37.826879    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:37.826883    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:37.851204    3304 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 03:34:37.871931    3304 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 03:34:37.877049    3304 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 03:34:37.877061    3304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 03:34:37.909782    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 03:34:38.392394    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:34:38.392445    3304 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 03:34:38.392455    3304 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 03:34:38.392506    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:38.392512    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.392517    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.392522    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.396201    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:38.399569    3304 system_pods.go:59] 9 kube-system pods found
	I0916 03:34:38.399589    3304 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 03:34:38.399596    3304 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 03:34:38.399602    3304 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 03:34:38.399606    3304 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:34:38.399611    3304 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 03:34:38.399615    3304 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 03:34:38.399618    3304 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:34:38.399624    3304 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 03:34:38.399627    3304 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:34:38.399631    3304 system_pods.go:74] duration metric: took 7.226704ms to wait for pod list to return data ...
	I0916 03:34:38.399639    3304 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:34:38.399678    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:34:38.399683    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.399695    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.399700    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.407687    3304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 03:34:38.408404    3304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:34:38.408417    3304 node_conditions.go:123] node cpu capacity is 2
	I0916 03:34:38.408426    3304 node_conditions.go:105] duration metric: took 8.78368ms to run NodePressure ...
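The NodePressure check above reads kubelet-style quantities such as the ephemeral storage capacity `17734596Ki`. A minimal conversion sketch that only handles the `Ki` suffix (real clients use `k8s.io/apimachinery`'s `resource.Quantity`, which understands the full suffix set: `Mi`, `Gi`, `m`, and so on):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKi converts a "<n>Ki" quantity into bytes (1 Ki = 1024 bytes).
func parseKi(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

func main() {
	b, _ := parseKi("17734596Ki") // the node's ephemeral storage capacity above
	fmt.Println(b)                // 18160226304
}
```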
	I0916 03:34:38.408437    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:38.708991    3304 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 03:34:38.709059    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0916 03:34:38.709065    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.709071    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.709079    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.710702    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.711793    3304 kubeadm.go:739] kubelet initialised
	I0916 03:34:38.711802    3304 kubeadm.go:740] duration metric: took 2.798868ms waiting for restarted kubelet to initialise ...
	I0916 03:34:38.711809    3304 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:38.711861    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:38.711867    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.711873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.711877    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.713532    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.715488    3304 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.715533    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:38.715538    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.715554    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.715559    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.716876    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.718440    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.718449    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.718454    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.718458    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.720116    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.720480    3304 pod_ready.go:98] node "ha-632000" hosting pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.720490    3304 pod_ready.go:82] duration metric: took 4.992557ms for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.720496    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.720501    3304 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.720529    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:34:38.720535    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.720540    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.720545    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.721899    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.722296    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.722303    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.722309    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.722311    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.723571    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.723979    3304 pod_ready.go:98] node "ha-632000" hosting pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.723988    3304 pod_ready.go:82] duration metric: took 3.482993ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.723994    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.723998    3304 pod_ready.go:79] waiting up to 4m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.724022    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:34:38.724027    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.724032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.724037    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.725189    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.725718    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.725725    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.725730    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.725734    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.726882    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.727191    3304 pod_ready.go:98] node "ha-632000" hosting pod "etcd-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.727201    3304 pod_ready.go:82] duration metric: took 3.198228ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.727206    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "etcd-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.727213    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.727242    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:34:38.727247    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.727252    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.727256    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.728489    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.793699    3304 request.go:632] Waited for 64.878296ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.793776    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.793786    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.793798    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.793825    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.796515    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:38.797110    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-apiserver-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.797125    3304 pod_ready.go:82] duration metric: took 69.907608ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.797133    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-apiserver-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.797139    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.993735    3304 request.go:632] Waited for 196.525215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:34:38.993809    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:34:38.993850    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.993865    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.993871    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.997843    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:39.193300    3304 request.go:632] Waited for 194.87785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.193380    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.193388    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.193397    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.193401    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.195202    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:39.195482    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-controller-manager-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.195492    3304 pod_ready.go:82] duration metric: took 398.353384ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.195500    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-controller-manager-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.195505    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:39.392617    3304 request.go:632] Waited for 197.072336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:34:39.392682    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:34:39.392689    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.392697    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.392704    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.395196    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:39.593726    3304 request.go:632] Waited for 198.129679ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.593859    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.593865    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.593873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.593878    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.595714    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:39.596185    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-proxy-g95xd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.596200    3304 pod_ready.go:82] duration metric: took 400.697103ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.596209    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-proxy-g95xd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.596215    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:39.792553    3304 request.go:632] Waited for 196.293737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:34:39.792591    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:34:39.792595    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.792601    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.792617    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.795184    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:39.993050    3304 request.go:632] Waited for 197.435612ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.993096    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.993104    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.993114    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.993122    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.996204    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:39.996546    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-scheduler-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.996566    3304 pod_ready.go:82] duration metric: took 400.341694ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.996575    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-scheduler-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.996582    3304 pod_ready.go:39] duration metric: took 1.284786546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:39.996594    3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:34:40.010623    3304 ops.go:34] apiserver oom_adj: -16
	I0916 03:34:40.010643    3304 kubeadm.go:597] duration metric: took 8.795112796s to restartPrimaryControlPlane
	I0916 03:34:40.010652    3304 kubeadm.go:394] duration metric: took 8.814823261s to StartCluster
	I0916 03:34:40.010667    3304 settings.go:142] acquiring lock: {Name:mk3e31582aa73ce4c6a48b30dd26240f48d1f910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:40.010790    3304 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:40.011227    3304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:40.011508    3304 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:34:40.011531    3304 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 03:34:40.011584    3304 addons.go:69] Setting storage-provisioner=true in profile "ha-632000"
	I0916 03:34:40.011604    3304 addons.go:234] Setting addon storage-provisioner=true in "ha-632000"
	I0916 03:34:40.011608    3304 addons.go:69] Setting default-storageclass=true in profile "ha-632000"
	W0916 03:34:40.011621    3304 addons.go:243] addon storage-provisioner should already be in state true
	I0916 03:34:40.011628    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:40.011632    3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-632000"
	I0916 03:34:40.011649    3304 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:34:40.011914    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.011928    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.011936    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.011951    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.020914    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51044
	I0916 03:34:40.020919    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51045
	I0916 03:34:40.021274    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.021295    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.021593    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.021595    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.021606    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.021607    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.021814    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.021881    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.021929    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.022023    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.022096    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.022273    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.022303    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.024436    3304 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:40.024681    3304 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x30dc720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:34:40.024973    3304 addons.go:234] Setting addon default-storageclass=true in "ha-632000"
	W0916 03:34:40.024982    3304 addons.go:243] addon default-storageclass should already be in state true
	I0916 03:34:40.024997    3304 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:34:40.025213    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.025241    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.030857    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51048
	I0916 03:34:40.031184    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.031486    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.031503    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.031718    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.031837    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.031920    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.031987    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.032954    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:40.033423    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51050
	I0916 03:34:40.034570    3304 out.go:177] * Verifying Kubernetes components...
	I0916 03:34:40.035362    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.056602    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.056630    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.057113    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.057705    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.057742    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.066758    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51052
	I0916 03:34:40.067114    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.067418    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.067429    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.067620    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.067747    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.067836    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.067909    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.068849    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:40.068980    3304 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:34:40.068987    3304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:34:40.068995    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:40.069083    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:40.069170    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:40.069263    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:40.069351    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:40.092378    3304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:34:40.114656    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:34:40.129518    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:40.150612    3304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:34:40.150628    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:34:40.150652    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:40.150820    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:40.150919    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:40.151009    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:40.151105    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:40.312530    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.312544    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.312710    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.312718    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.312726    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.312730    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.312744    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.312859    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.312865    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.312868    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.312950    3304 round_trippers.go:463] GET https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 03:34:40.312957    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.312962    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.312967    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.314896    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.315270    3304 round_trippers.go:463] PUT https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 03:34:40.315277    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.315282    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.315286    3304 round_trippers.go:473]     Content-Type: application/json
	I0916 03:34:40.315288    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.317294    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.317394    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.317402    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.317552    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.317560    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.317571    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.365668    3304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:34:40.376406    3304 node_ready.go:35] waiting up to 6m0s for node "ha-632000" to be "Ready" ...
	I0916 03:34:40.376479    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:40.376485    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.376491    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.376494    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.378267    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.426079    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:34:40.876657    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:40.876672    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.876678    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.876681    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.878242    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.883063    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.883074    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.883237    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.883246    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.883255    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.883256    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.883283    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.883399    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.883408    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.883419    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.904880    3304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 03:34:40.945708    3304 addons.go:510] duration metric: took 934.197445ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 03:34:41.376796    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:41.376817    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:41.376828    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:41.376835    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:41.379843    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:41.878402    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:41.878423    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:41.878434    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:41.878441    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:41.880784    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:42.377995    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:42.378017    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:42.378028    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:42.378036    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:42.380149    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:42.380565    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:42.877115    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:42.877132    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:42.877138    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:42.877142    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:42.878996    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:43.377951    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:43.377971    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:43.377979    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:43.377983    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:43.380950    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:43.877542    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:43.877568    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:43.877585    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:43.877597    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:43.880401    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:44.378673    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:44.378699    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:44.378711    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:44.378718    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:44.380901    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:44.381440    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:44.878590    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:44.878617    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:44.878627    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:44.878633    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:44.881276    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:45.376997    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:45.377021    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:45.377032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:45.377038    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:45.379757    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:45.877711    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:45.877733    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:45.877744    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:45.877750    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:45.880462    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:46.378583    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:46.378609    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:46.378626    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:46.378635    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:46.381444    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:46.381801    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:46.877038    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:46.877050    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:46.877056    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:46.877059    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:46.878411    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:47.377110    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:47.377131    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:47.377143    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:47.377149    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:47.379987    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:47.877576    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:47.877593    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:47.877602    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:47.877609    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:47.879788    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.377874    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:48.377894    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:48.377903    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:48.377908    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:48.380037    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.877671    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:48.877726    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:48.877745    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:48.877755    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:48.880509    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.881074    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:49.378095    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:49.378122    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:49.378133    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:49.378142    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:49.381074    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:49.877863    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:49.877880    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:49.877889    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:49.877893    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:49.880159    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:50.377505    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:50.377516    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:50.377523    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:50.377526    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:50.379284    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:50.876843    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:50.876865    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:50.876876    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:50.876883    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:50.879406    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:51.378297    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:51.378323    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:51.378361    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:51.378375    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:51.381048    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:51.381405    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:51.876417    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:51.876433    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:51.876441    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:51.876445    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:51.878213    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:52.377976    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:52.377998    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:52.378009    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:52.378017    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:52.380591    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:52.876571    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:52.876592    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:52.876609    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:52.876615    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:52.879141    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:53.378471    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:53.378497    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:53.378509    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:53.378514    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:53.381242    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:53.381593    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:53.878495    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:53.878519    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:53.878531    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:53.878538    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:53.880732    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:54.376608    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:54.376631    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:54.376642    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:54.376649    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:54.379663    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:54.876659    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:54.876681    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:54.876694    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:54.876701    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:54.879185    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.376950    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:55.376977    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:55.376987    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:55.376993    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:55.379583    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.877176    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:55.877202    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:55.877214    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:55.877222    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:55.879951    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.880276    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:56.377034    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:56.377056    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:56.377068    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:56.377075    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:56.379858    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:56.876661    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:56.876682    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:56.876696    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:56.876710    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:56.879328    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.377125    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.377167    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.377177    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.377182    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.379516    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.379811    3304 node_ready.go:49] node "ha-632000" has status "Ready":"True"
	I0916 03:34:57.379823    3304 node_ready.go:38] duration metric: took 17.003672807s for node "ha-632000" to be "Ready" ...
	I0916 03:34:57.379829    3304 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:57.379871    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:57.379877    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.379882    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.379886    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.381644    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.383606    3304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:57.383645    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:57.383650    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.383655    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.383658    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.384733    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.385106    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.385113    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.385118    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.385122    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.386159    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.884163    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:57.884182    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.884189    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.884195    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.886295    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.886956    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.886964    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.886970    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.886973    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.888563    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.384115    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:58.384137    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.384149    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.384155    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.387276    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:58.387749    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:58.387759    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.387767    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.387792    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.389133    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.885226    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:58.885246    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.885275    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.885281    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.886994    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.887489    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:58.887496    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.887502    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.887506    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.888603    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:59.385814    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:59.385840    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.385851    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.385858    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.388513    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:59.389033    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:59.389044    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.389051    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.389062    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.390522    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:59.391501    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:34:59.884866    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:59.884892    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.884903    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.884911    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.887993    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:59.888511    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:59.888521    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.888529    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.888533    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.891452    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.384323    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:00.384338    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.384347    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.384353    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.386438    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.386904    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:00.386911    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.386917    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.386920    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.388042    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:00.884219    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:00.884241    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.884260    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.884267    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.887069    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.887514    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:00.887523    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.887530    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.887535    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.888924    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.385822    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:01.385844    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.385855    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.385862    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.388461    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:01.388907    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:01.388916    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.388924    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.388933    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.390336    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.884266    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:01.884323    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.884336    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.884344    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.887031    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:01.887802    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:01.887813    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.887821    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.887826    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.889176    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.889480    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:02.384160    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:02.384181    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.384191    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.384197    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.387183    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:02.387689    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:02.387699    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.387707    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.387722    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.389179    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:02.885122    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:02.885146    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.885157    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.885165    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.887435    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:02.887961    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:02.887968    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.887974    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.887977    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.889091    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.385251    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:03.385279    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.385291    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.385298    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.387871    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:03.388816    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:03.388826    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.388834    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.388839    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.390130    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.884630    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:03.884656    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.884666    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.884673    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.887524    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:03.888016    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:03.888027    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.888033    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.888038    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.889721    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.890029    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:04.383979    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:04.384001    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.384013    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.384018    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.386805    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:04.387373    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:04.387384    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.387398    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.387405    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.388849    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:04.884523    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:04.884548    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.884560    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.884565    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.887300    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:04.888008    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:04.888019    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.888027    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.888032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.889577    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.384175    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:05.384198    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.384215    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.384222    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.386418    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:05.387016    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:05.387024    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.387029    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.387034    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.388282    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.884900    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:05.884925    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.884937    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.884944    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.887666    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:05.888293    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:05.888304    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.888311    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.888317    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.889918    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.890216    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:06.385456    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:06.385472    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.385480    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.385486    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.387518    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:06.388054    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:06.388061    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.388068    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.388086    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.389437    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:06.884205    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:06.884230    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.884242    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.884247    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.887550    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:06.888131    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:06.888139    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.888145    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.888148    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.889317    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.383655    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:07.383672    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.383693    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.383698    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.385537    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.385926    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:07.385933    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.385940    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.385943    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.387166    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.885661    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:07.885688    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.885697    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.885702    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.888637    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:07.889170    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:07.889181    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.889189    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.889195    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.890632    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.891098    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:08.385687    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:08.385714    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.385725    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.385740    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.388506    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:08.388962    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:08.388972    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.388980    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.388998    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.390628    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:08.883835    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:08.883850    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.883857    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.883860    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.886160    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:08.886618    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:08.886626    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.886632    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.886635    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.887892    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.384848    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:09.384865    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.384873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.384876    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.387304    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.387775    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.387783    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.387790    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.387793    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.389013    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.389277    3304 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.389286    3304 pod_ready.go:82] duration metric: took 12.005863458s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.389293    3304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.389324    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:35:09.389330    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.389336    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.389339    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.390307    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.390713    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.390720    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.390725    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.390730    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.391675    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.391982    3304 pod_ready.go:93] pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.391991    3304 pod_ready.go:82] duration metric: took 2.694094ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.391997    3304 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.392029    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:35:09.392034    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.392040    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.392043    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.393164    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.393744    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.393751    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.393757    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.393761    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.394666    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.394943    3304 pod_ready.go:93] pod "etcd-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.394951    3304 pod_ready.go:82] duration metric: took 2.948803ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.394959    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.394992    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:35:09.394997    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.395003    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.395007    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.396122    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.396466    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.396473    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.396477    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.396480    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.397544    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.397850    3304 pod_ready.go:93] pod "kube-apiserver-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.397857    3304 pod_ready.go:82] duration metric: took 2.893653ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.397863    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.397889    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:35:09.397894    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.397900    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.397905    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.399129    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.399515    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.399522    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.399527    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.399530    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.400631    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.400898    3304 pod_ready.go:93] pod "kube-controller-manager-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.400906    3304 pod_ready.go:82] duration metric: took 3.037606ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.400918    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.585908    3304 request.go:632] Waited for 184.827903ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:35:09.585952    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:35:09.585960    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.585968    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.585976    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.588274    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.786885    3304 request.go:632] Waited for 198.109185ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.787049    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.787060    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.787071    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.787079    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.789763    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.790084    3304 pod_ready.go:93] pod "kube-proxy-g95xd" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.790096    3304 pod_ready.go:82] duration metric: took 389.178105ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.790106    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.986947    3304 request.go:632] Waited for 196.780374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:35:09.987087    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:35:09.987098    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.987109    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.987122    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.989733    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.186939    3304 request.go:632] Waited for 196.171001ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:10.187059    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:10.187069    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.187089    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.187095    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.189807    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.190399    3304 pod_ready.go:93] pod "kube-scheduler-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:10.190418    3304 pod_ready.go:82] duration metric: took 400.304067ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:10.190428    3304 pod_ready.go:39] duration metric: took 12.810794631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:35:10.190446    3304 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:35:10.190528    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:35:10.203518    3304 api_server.go:72] duration metric: took 30.192477873s to wait for apiserver process to appear ...
	I0916 03:35:10.203529    3304 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:35:10.203539    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:35:10.206990    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:35:10.207032    3304 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:35:10.207038    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.207044    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.207049    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.207501    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:10.207560    3304 api_server.go:141] control plane version: v1.31.1
	I0916 03:35:10.207569    3304 api_server.go:131] duration metric: took 4.036294ms to wait for apiserver health ...
	I0916 03:35:10.207574    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:35:10.385757    3304 request.go:632] Waited for 178.107675ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.385849    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.385859    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.385869    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.385875    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.389657    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.392081    3304 system_pods.go:59] 9 kube-system pods found
	I0916 03:35:10.392092    3304 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:35:10.392096    3304 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:35:10.392098    3304 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:35:10.392101    3304 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:35:10.392103    3304 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:35:10.392106    3304 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:35:10.392109    3304 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:35:10.392113    3304 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:35:10.392116    3304 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 03:35:10.392122    3304 system_pods.go:74] duration metric: took 184.547797ms to wait for pod list to return data ...
	I0916 03:35:10.392132    3304 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:35:10.585358    3304 request.go:632] Waited for 193.141273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:35:10.585408    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:35:10.585416    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.585427    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.585471    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.588788    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.588946    3304 default_sa.go:45] found service account: "default"
	I0916 03:35:10.588958    3304 default_sa.go:55] duration metric: took 196.824843ms for default service account to be created ...
	I0916 03:35:10.588968    3304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:35:10.785029    3304 request.go:632] Waited for 196.001491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.785170    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.785182    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.785198    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.785205    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.788727    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.790881    3304 system_pods.go:86] 9 kube-system pods found
	I0916 03:35:10.790911    3304 system_pods.go:89] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:35:10.790916    3304 system_pods.go:89] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:35:10.790919    3304 system_pods.go:89] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:35:10.790922    3304 system_pods.go:89] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:35:10.790924    3304 system_pods.go:89] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:35:10.790927    3304 system_pods.go:89] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:35:10.790935    3304 system_pods.go:89] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:35:10.790939    3304 system_pods.go:89] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:35:10.790945    3304 system_pods.go:89] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 03:35:10.790951    3304 system_pods.go:126] duration metric: took 201.981143ms to wait for k8s-apps to be running ...
	I0916 03:35:10.790964    3304 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:35:10.791037    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:35:10.804245    3304 system_svc.go:56] duration metric: took 13.277206ms WaitForService to wait for kubelet
	I0916 03:35:10.804260    3304 kubeadm.go:582] duration metric: took 30.793230339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:35:10.804274    3304 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:35:10.985571    3304 request.go:632] Waited for 181.22969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0916 03:35:10.985617    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:35:10.985625    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.985636    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.985645    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.988104    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.988460    3304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:35:10.988470    3304 node_conditions.go:123] node cpu capacity is 2
	I0916 03:35:10.988475    3304 node_conditions.go:105] duration metric: took 184.185428ms to run NodePressure ...
	I0916 03:35:10.988484    3304 start.go:241] waiting for startup goroutines ...
	I0916 03:35:10.988489    3304 start.go:246] waiting for cluster config update ...
	I0916 03:35:10.988495    3304 start.go:255] writing updated cluster config ...
	I0916 03:35:10.989154    3304 ssh_runner.go:195] Run: rm -f paused
	I0916 03:35:11.029032    3304 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:35:11.049786    3304 out.go:201] 
	W0916 03:35:11.070937    3304 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:35:11.091559    3304 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:35:11.134086    3304 out.go:177] * Done! kubectl is now configured to use "ha-632000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593663222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593673564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593881815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:07 ha-632000 dockerd[922]: time="2024-09-16T10:35:07.563121477Z" level=info msg="ignoring event" container=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563622963Z" level=info msg="shim disconnected" id=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba namespace=moby
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563954778Z" level=warning msg="cleaning up after shim disconnected" id=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba namespace=moby
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563989325Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865227113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865264506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865295808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865376056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:08 ha-632000 cri-dockerd[1176]: time="2024-09-16T10:35:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae26258aa302306594e1686d25b0188561b24033ecc3092a2b33b8cec7c9026a/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051481955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051545488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051558461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051819249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083133726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083198092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083207691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083269287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 cri-dockerd[1176]: time="2024-09-16T10:35:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0bec5171951368b86ff04b5574c9c5cdfeb4cdd6eacba6ff4068daf99eca6536/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300265928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300536225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300575094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300839535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3cf37607dad84       c69fa2e9cbf5f                                                                              3 seconds ago        Running             coredns                   1                   0bec517195136       coredns-7c65d6cfc9-jkv8h
	bdd5bdc32cf5f       c69fa2e9cbf5f                                                                              3 seconds ago        Running             coredns                   1                   ae26258aa3023       coredns-7c65d6cfc9-kmtsq
	afaf715ba2192       12968670680f4                                                                              35 seconds ago       Running             kindnet-cni               1                   1d37d305e287a       kindnet-gs2gm
	35d5611bea628       60c005f310ff3                                                                              35 seconds ago       Running             kube-proxy                1                   2887b4fd64c30       kube-proxy-g95xd
	d6e8db98c320f       6e38f40d628db                                                                              35 seconds ago       Exited              storage-provisioner       1                   b4724bae016fb       storage-provisioner
	8e82c5b6f1445       9aa1fad941575                                                                              39 seconds ago       Running             kube-scheduler            1                   676f5e9f1e9a8       kube-scheduler-ha-632000
	b75c9f7049a91       175ffd71cce3d                                                                              39 seconds ago       Running             kube-controller-manager   1                   f69748e5377e8       kube-controller-manager-ha-632000
	69c5f26bd56fc       6bab7719df100                                                                              39 seconds ago       Running             kube-apiserver            1                   80bc5bb77c1d2       kube-apiserver-ha-632000
	8977e4d70f2ee       2e96e5913fc06                                                                              39 seconds ago       Running             etcd                      1                   650dad7d2a118       etcd-ha-632000
	9cdebd7b24ee9       c69fa2e9cbf5f                                                                              About a minute ago   Exited              coredns                   0                   943f7896df505       coredns-7c65d6cfc9-jkv8h
	1a9b8d16193d8       c69fa2e9cbf5f                                                                              About a minute ago   Exited              coredns                   0                   ea367824f7436       coredns-7c65d6cfc9-kmtsq
	da46c43d13fe8       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166   About a minute ago   Exited              kindnet-cni               0                   b16c72338b6ac       kindnet-gs2gm
	fae437a3e2407       60c005f310ff3                                                                              About a minute ago   Exited              kube-proxy                0                   7d1ed68d9ed61       kube-proxy-g95xd
	6dd1718ae6cb7       2e96e5913fc06                                                                              About a minute ago   Exited              etcd                      0                   61ff2b4664268       etcd-ha-632000
	69e9ce19f1363       9aa1fad941575                                                                              About a minute ago   Exited              kube-scheduler            0                   cf944230e79eb       kube-scheduler-ha-632000
	95857f7ef6fa1       6bab7719df100                                                                              About a minute ago   Exited              kube-apiserver            0                   3a54b2c915f89       kube-apiserver-ha-632000
	24a44bacb8b1e       175ffd71cce3d                                                                              About a minute ago   Exited              kube-controller-manager   0                   04ad9902d4d19       kube-controller-manager-ha-632000
	
	
	==> coredns [1a9b8d16193d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3cf37607dad8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9cdebd7b24ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bdd5bdc32cf5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               ha-632000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-632000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-632000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-632000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:34:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-632000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 80143f0d20d747b59263aea5212f463f
	  System UUID:                151047b3-0000-0000-af67-8767f9a495bd
	  Boot ID:                    ef36167c-8045-4347-ac98-d12c1081351a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jkv8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                 coredns-7c65d6cfc9-kmtsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                 etcd-ha-632000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         106s
	  kube-system                 kindnet-gs2gm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-apiserver-ha-632000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-ha-632000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-g95xd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-ha-632000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 98s                kube-proxy       
	  Normal  Starting                 35s                kube-proxy       
	  Normal  Starting                 106s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  106s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     105s               kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    105s               kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  105s               kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           101s               node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	  Normal  NodeReady                80s                kubelet          Node ha-632000 status is now: NodeReady
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x7 over 40s)  kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	
	
	==> dmesg <==
	[  +0.008129] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691988] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006862] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.690338] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.475205] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +0.095359] systemd-fstab-generator[481]: Ignoring "noauto" option for root device
	[  +1.258412] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.597885] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.270736] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.094069] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.123685] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +2.457317] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.083989] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.111440] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.118119] systemd-fstab-generator[1168]: Ignoring "noauto" option for root device
	[  +0.416274] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +1.886558] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 261 callbacks suppressed
	[  +5.074379] kauditd_printk_skb: 95 callbacks suppressed
	[  +2.468516] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[Sep16 10:35] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [6dd1718ae6cb] <==
	{"level":"info","ts":"2024-09-16T10:33:24.707652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.712635Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:24.712748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.714615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.715358Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.717393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.717649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.717680Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.718273Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.719184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:33:24.719319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.723827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:03.002942Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:03.002999Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-632000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:03.003078Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.003173Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.045479Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.045527Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:03.045562Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-09-16T10:34:03.046683Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:03.046742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:03.046750Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-632000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [8977e4d70f2e] <==
	{"level":"info","ts":"2024-09-16T10:34:33.842210Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:33.845570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860)"}
	{"level":"info","ts":"2024-09-16T10:34:33.845640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b8c6c7563d17d844","added-peer-peer-urls":["https://192.169.0.5:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:33.845775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:33.845847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:33.849233Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:33.849265Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:33.849871Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b8c6c7563d17d844","initial-advertise-peer-urls":["https://192.169.0.5:2380"],"listen-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:33.849948Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.728265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.730226Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:35.730416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:35.730682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:35.731522Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:35.731693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:35.732169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.732170Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.732813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:34:35.733036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:13 up 1 min,  0 users,  load average: 0.59, 0.19, 0.07
	Linux ha-632000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [afaf715ba219] <==
	I0916 10:34:37.992823       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:37.996117       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:34:37.996293       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:37.996324       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:37.996333       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:38.586024       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:34:48.586883       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:34:48.587014       1 main.go:299] handling current node
	I0916 10:34:58.588144       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:34:58.588203       1 main.go:299] handling current node
	I0916 10:35:08.587084       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:35:08.587455       1 main.go:299] handling current node
	
	
	==> kindnet [da46c43d13fe] <==
	I0916 10:33:38.105068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:38.105822       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:33:38.105896       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:38.105923       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:38.105937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:38.701206       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:33:48.705636       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:48.705736       1 main.go:299] handling current node
	I0916 10:33:58.706377       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:58.706440       1 main.go:299] handling current node
	
	
	==> kube-apiserver [69c5f26bd56f] <==
	I0916 10:34:36.621882       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:36.621893       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:36.621897       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:36.670791       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:34:36.671289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:34:36.671573       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:34:36.672001       1 policy_source.go:224] refreshing policies
	I0916 10:34:36.670914       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:36.672338       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:36.672470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:36.672555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:36.672568       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:36.672982       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0916 10:34:36.676469       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:36.677144       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:34:36.724659       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:36.740613       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:37.574461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:38.554987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:38.686490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:38.697546       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:38.869795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:38.876774       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:40.310037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:34:40.380264       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [95857f7ef6fa] <==
	W0916 10:34:03.020165       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020189       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020209       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020256       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020305       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020352       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020374       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020395       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020441       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020489       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020510       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020562       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020590       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020643       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020679       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020730       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020782       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020814       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020842       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020893       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020948       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023259       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023329       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023360       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.029700       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [24a44bacb8b1] <==
	I0916 10:33:32.792285       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.813766       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.836606       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:33:32.836825       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:33:33.208030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283196       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283305       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:33.489386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:33.785544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="393.158805ms"
	I0916 10:33:33.795508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.792999ms"
	I0916 10:33:33.796330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.465µs"
	I0916 10:33:33.796593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.299µs"
	I0916 10:33:53.221882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.228026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.232440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.065µs"
	I0916 10:33:53.240523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.002µs"
	I0916 10:33:53.245409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.983µs"
	I0916 10:33:53.253146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.846µs"
	I0916 10:33:54.145943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.356µs"
	I0916 10:33:54.168878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.87555ms"
	I0916 10:33:54.169182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.016µs"
	I0916 10:33:54.179810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.838071ms"
	I0916 10:33:54.180086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.84µs"
	I0916 10:33:57.576580       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:33:58.801252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	
	
	==> kube-controller-manager [b75c9f7049a9] <==
	I0916 10:34:39.965965       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:34:39.966418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:39.969054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.999421ms"
	I0916 10:34:39.969135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.787µs"
	I0916 10:34:39.989530       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:34:40.011735       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:40.044676       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:34:40.057740       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:40.062365       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:34:40.071587       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:40.095503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:40.139376       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:40.186827       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:34:40.589219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:40.667214       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:40.667250       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:34:57.405752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:57.412393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:35:00.117952       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:35:00.118253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:35:09.452642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.695µs"
	I0916 10:35:09.486605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.460496ms"
	I0916 10:35:09.486799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="85.542µs"
	I0916 10:35:09.494578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.669414ms"
	I0916 10:35:09.495670       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.343µs"
	
	
	==> kube-proxy [35d5611bea62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:34:37.766945       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:34:37.780385       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:34:37.780815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:37.812489       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:34:37.812528       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:34:37.812545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:37.814902       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:37.815438       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:37.815466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.817720       1 config.go:199] "Starting service config controller"
	I0916 10:34:37.817748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:37.817763       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:37.817766       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:37.818278       1 config.go:328] "Starting node config controller"
	I0916 10:34:37.818305       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:37.918323       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:34:37.918683       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:37.919059       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fae437a3e240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:33:34.691611       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:33:34.697203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:33:34.697256       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:34.722746       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:33:34.723060       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:33:34.723106       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:34.725578       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:34.725970       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:34.725998       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:34.726895       1 config.go:199] "Starting service config controller"
	I0916 10:33:34.726932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:34.726949       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:34.726952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:34.727508       1 config.go:328] "Starting node config controller"
	I0916 10:33:34.727534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:34.827447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:34.827457       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:34.827564       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69e9ce19f136] <==
	E0916 10:33:25.865183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.862775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:25.865397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:25.865433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:25.865448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.708747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:26.708911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.709690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:26.709836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.726430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:26.726462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.767336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:26.767420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.769370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:26.769455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.794289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:26.794346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:27.460783       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:34:02.987165       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e82c5b6f144] <==
	W0916 10:34:36.639348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.639411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.639583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:34:36.639715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.640007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:34:36.640271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:34:36.640470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:34:36.640675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:34:36.641643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:34:36.641661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:34:36.641812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:34:36.641833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.643637       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:34:36.643672       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:34:38.124778       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:41 ha-632000 kubelet[1437]: E0916 10:34:41.998266    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:42 ha-632000 kubelet[1437]: E0916 10:34:42.998903    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:43 ha-632000 kubelet[1437]: E0916 10:34:43.030466    1437 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 16 10:34:43 ha-632000 kubelet[1437]: E0916 10:34:43.999614    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618095    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618289    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618332    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume podName:b174db06-0ade-4594-8b4b-6058301491f8 nodeName:}" failed. No retries permitted until 2024-09-16 10:34:52.61831104 +0000 UTC m=+19.750575570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume") pod "coredns-7c65d6cfc9-jkv8h" (UID: "b174db06-0ade-4594-8b4b-6058301491f8") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618540    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume podName:535c3a5b-013f-4bdc-94c4-b109c6f43b0b nodeName:}" failed. No retries permitted until 2024-09-16 10:34:52.618517321 +0000 UTC m=+19.750781848 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume") pod "coredns-7c65d6cfc9-kmtsq" (UID: "535c3a5b-013f-4bdc-94c4-b109c6f43b0b") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.998247    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:46 ha-632000 kubelet[1437]: E0916 10:34:46.000383    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:46 ha-632000 kubelet[1437]: E0916 10:34:46.997866    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:47 ha-632000 kubelet[1437]: E0916 10:34:47.997933    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:48 ha-632000 kubelet[1437]: E0916 10:34:48.034733    1437 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 16 10:34:48 ha-632000 kubelet[1437]: E0916 10:34:48.997973    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:49 ha-632000 kubelet[1437]: E0916 10:34:49.997909    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:50 ha-632000 kubelet[1437]: E0916 10:34:50.998946    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:51 ha-632000 kubelet[1437]: E0916 10:34:51.999476    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705368    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705538    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume podName:535c3a5b-013f-4bdc-94c4-b109c6f43b0b nodeName:}" failed. No retries permitted until 2024-09-16 10:35:08.705508953 +0000 UTC m=+35.837773486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume") pod "coredns-7c65d6cfc9-kmtsq" (UID: "535c3a5b-013f-4bdc-94c4-b109c6f43b0b") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705715    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705923    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume podName:b174db06-0ade-4594-8b4b-6058301491f8 nodeName:}" failed. No retries permitted until 2024-09-16 10:35:08.705902918 +0000 UTC m=+35.838167449 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume") pod "coredns-7c65d6cfc9-jkv8h" (UID: "b174db06-0ade-4594-8b4b-6058301491f8") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.998708    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: I0916 10:35:08.423160    1437 scope.go:117] "RemoveContainer" containerID="b3d5179d61251454ae7a85db0a0429a411cae76733938adfbbf5ba05e89d0a44"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: I0916 10:35:08.423361    1437 scope.go:117] "RemoveContainer" containerID="d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: E0916 10:35:08.423439    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75558cac-fdbb-4a54-94c5-31f3f3d980c3)\"" pod="kube-system/storage-provisioner" podUID="75558cac-fdbb-4a54-94c5-31f3f3d980c3"
	
	
	==> storage-provisioner [d6e8db98c320] <==
	I0916 10:34:37.541990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:35:07.552356       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-632000 -n ha-632000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-632000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (63.88s)
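The kubelet log in the post-mortem above shows the classic capped exponential retry for the failed `MountVolume.SetUp` operations: `durationBeforeRetry 8s` followed by `durationBeforeRetry 16s`. As a minimal sketch of that pattern (the base delay, factor, and cap below are illustrative assumptions chosen to match the 8s→16s progression in the log, not kubelet's actual tuning constants):

```python
def backoff_schedule(base, factor, cap, n):
    """Yield the first n retry delays (seconds) of a capped exponential backoff."""
    delay = base
    for _ in range(n):
        yield min(delay, cap)
        delay *= factor

# Matches the log's observed progression: 8s, then 16s, doubling up to a cap.
delays = list(backoff_schedule(base=8, factor=2, cap=128, n=5))
print(delays)  # [8, 16, 32, 64, 128]
```

Each retry the kubelet logs ("No retries permitted until …") corresponds to one step of such a schedule; the mounts here never succeed because the `kube-system/coredns` ConfigMap is not yet registered with the restarted API server.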

                                                
                                    

TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-632000" in json of 'profile list' to have "Degraded" status but have "Running" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Running\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersi
on\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\"
:\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
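The assertion above fails because the profile reports `"Status": "Running"` where the test expects `"Degraded"` after the cluster restart. The check boils down to parsing the `profile list --output json` payload and reading the top-level `Status` of the named profile. A minimal sketch (the JSON literal is a hand-abridged stand-in for the full output quoted in the failure message, and `profile_status` is a hypothetical helper, not a minikube API):

```python
import json

# Hand-abridged stand-in for `minikube profile list --output json`;
# only the fields the status check inspects are kept.
profile_list = json.loads(
    '{"invalid": [], "valid": [{"Name": "ha-632000", "Status": "Running"}]}'
)

def profile_status(listing, name):
    # Find the named profile among the valid entries and return its
    # reported Status string, or None if the profile is absent.
    for profile in listing["valid"]:
        if profile["Name"] == name:
            return profile["Status"]
    return None

print(profile_status(profile_list, "ha-632000"))  # Running
```

Against this report's output the helper returns `Running`, which is exactly the mismatch `ha_test.go:413` flags.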
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-632000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-632000 logs -n 25: (3.025061946s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:30 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  --             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- exec  -- nslookup    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-632000 -- get pods -o          | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| node    | add -p ha-632000 -v=7                | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node stop m02 -v=7         | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | ha-632000 node start m02 -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:31 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000 -v=7               | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | -p ha-632000 -v=7                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:32 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true -v=7        | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:32 PDT | 16 Sep 24 03:33 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| node    | list -p ha-632000                    | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	| node    | ha-632000 node delete m03 -v=7       | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:33 PDT |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| stop    | ha-632000 stop -v=7                  | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:34 PDT | 16 Sep 24 03:34 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	| start   | -p ha-632000 --wait=true             | ha-632000 | jenkins | v1.34.0 | 16 Sep 24 03:34 PDT | 16 Sep 24 03:35 PDT |
	|         | -v=7 --alsologtostderr               |           |         |         |                     |                     |
	|         | --driver=hyperkit                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:34:10
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:34:10.968911    3304 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:34:10.969089    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.969094    3304 out.go:358] Setting ErrFile to fd 2...
	I0916 03:34:10.969098    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:34:10.969259    3304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:34:10.970749    3304 out.go:352] Setting JSON to false
	I0916 03:34:10.992630    3304 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2021,"bootTime":1726480829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:34:10.992719    3304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:34:11.014288    3304 out.go:177] * [ha-632000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:34:11.056022    3304 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:34:11.056088    3304 notify.go:220] Checking for updates...
	I0916 03:34:11.100674    3304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:11.121937    3304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:34:11.142924    3304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:34:11.163772    3304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:34:11.184946    3304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:34:11.206305    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:11.206798    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.206859    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.215855    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51012
	I0916 03:34:11.216221    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.216687    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.216709    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.216968    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.217090    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.217287    3304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:34:11.217550    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.217574    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.225810    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51014
	I0916 03:34:11.226145    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.226497    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.226514    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.226715    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.226820    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.255869    3304 out.go:177] * Using the hyperkit driver based on existing profile
	I0916 03:34:11.297599    3304 start.go:297] selected driver: hyperkit
	I0916 03:34:11.297620    3304 start.go:901] validating driver "hyperkit" against &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:11.297787    3304 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:34:11.297921    3304 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:34:11.298084    3304 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:34:11.306846    3304 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:34:11.310566    3304 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.310584    3304 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:34:11.313149    3304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:34:11.313185    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:11.313229    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:11.313303    3304 start.go:340] cluster config:
	{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:11.313391    3304 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:34:11.355837    3304 out.go:177] * Starting "ha-632000" primary control-plane node in "ha-632000" cluster
	I0916 03:34:11.376857    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:34:11.376928    3304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:34:11.376948    3304 cache.go:56] Caching tarball of preloaded images
	I0916 03:34:11.377137    3304 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:34:11.377156    3304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:34:11.377298    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:34:11.378217    3304 start.go:360] acquireMachinesLock for ha-632000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:34:11.378346    3304 start.go:364] duration metric: took 103.654µs to acquireMachinesLock for "ha-632000"
	I0916 03:34:11.378382    3304 start.go:96] Skipping create...Using existing machine configuration
	I0916 03:34:11.378401    3304 fix.go:54] fixHost starting: 
	I0916 03:34:11.378854    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:11.378911    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:11.388034    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51016
	I0916 03:34:11.388401    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:11.388702    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:11.388712    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:11.388935    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:11.389048    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.389151    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:11.389228    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.389304    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3227
	I0916 03:34:11.390214    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 3227 missing from process table
	I0916 03:34:11.390244    3304 fix.go:112] recreateIfNeeded on ha-632000: state=Stopped err=<nil>
	I0916 03:34:11.390260    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	W0916 03:34:11.390342    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 03:34:11.411956    3304 out.go:177] * Restarting existing hyperkit VM for "ha-632000" ...
	I0916 03:34:11.435771    3304 main.go:141] libmachine: (ha-632000) Calling .Start
	I0916 03:34:11.436068    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.436148    3304 main.go:141] libmachine: (ha-632000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid
	I0916 03:34:11.437908    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid 3227 missing from process table
	I0916 03:34:11.437923    3304 main.go:141] libmachine: (ha-632000) DBG | pid 3227 is in state "Stopped"
	I0916 03:34:11.437944    3304 main.go:141] libmachine: (ha-632000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid...
	I0916 03:34:11.438138    3304 main.go:141] libmachine: (ha-632000) DBG | Using UUID 1510d94a-ca8f-47b3-af67-8767f9a495bd
	I0916 03:34:11.547334    3304 main.go:141] libmachine: (ha-632000) DBG | Generated MAC 1a:88:5b:ab:d5:de
	I0916 03:34:11.547372    3304 main.go:141] libmachine: (ha-632000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:34:11.547436    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:34:11.547476    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1510d94a-ca8f-47b3-af67-8767f9a495bd", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:34:11.547505    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1510d94a-ca8f-47b3-af67-8767f9a495bd", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:34:11.547559    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1510d94a-ca8f-47b3-af67-8767f9a495bd -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/ha-632000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:34:11.547576    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:34:11.549082    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 DEBUG: hyperkit: Pid is 3317
	I0916 03:34:11.549513    3304 main.go:141] libmachine: (ha-632000) DBG | Attempt 0
	I0916 03:34:11.549523    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:11.549597    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:11.551382    3304 main.go:141] libmachine: (ha-632000) DBG | Searching for 1a:88:5b:ab:d5:de in /var/db/dhcpd_leases ...
	I0916 03:34:11.551471    3304 main.go:141] libmachine: (ha-632000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:34:11.551482    3304 main.go:141] libmachine: (ha-632000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95ae1}
	I0916 03:34:11.551490    3304 main.go:141] libmachine: (ha-632000) DBG | Found match: 1a:88:5b:ab:d5:de
	I0916 03:34:11.551498    3304 main.go:141] libmachine: (ha-632000) DBG | IP: 192.169.0.5
	I0916 03:34:11.551594    3304 main.go:141] libmachine: (ha-632000) Calling .GetConfigRaw
	I0916 03:34:11.552255    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:11.552426    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:34:11.552991    3304 machine.go:93] provisionDockerMachine start ...
	I0916 03:34:11.553003    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:11.553145    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:11.553247    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:11.553374    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:11.553498    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:11.553613    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:11.553740    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:11.553943    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:11.553952    3304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 03:34:11.557339    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:34:11.610827    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:34:11.611542    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:34:11.611555    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:34:11.611562    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:34:11.611570    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:34:11.993592    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:34:11.993610    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:34:12.108247    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:34:12.108266    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:34:12.108276    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:34:12.108288    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:34:12.109170    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:34:12.109180    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:34:17.697882    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:34:17.697912    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:34:17.697923    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:34:17.721257    3304 main.go:141] libmachine: (ha-632000) DBG | 2024/09/16 03:34:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:34:21.554022    3304 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.5:22: connect: connection refused
	I0916 03:34:24.613753    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0916 03:34:24.613768    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.613936    3304 buildroot.go:166] provisioning hostname "ha-632000"
	I0916 03:34:24.613947    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.614064    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.614156    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.614278    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.614380    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.614465    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.614592    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.614740    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.614748    3304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000 && echo "ha-632000" | sudo tee /etc/hostname
	I0916 03:34:24.681698    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000
	
	I0916 03:34:24.681716    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.681845    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.681945    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.682035    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.682124    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.682269    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.682425    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.682436    3304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:34:24.745829    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:34:24.745851    3304 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:34:24.745869    3304 buildroot.go:174] setting up certificates
	I0916 03:34:24.745877    3304 provision.go:84] configureAuth start
	I0916 03:34:24.745886    3304 main.go:141] libmachine: (ha-632000) Calling .GetMachineName
	I0916 03:34:24.746023    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:24.746113    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.746208    3304 provision.go:143] copyHostCerts
	I0916 03:34:24.746243    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:34:24.746316    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:34:24.746324    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:34:24.746479    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:34:24.746699    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:34:24.746738    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:34:24.746743    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:34:24.746868    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:34:24.747030    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:34:24.747068    3304 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:34:24.747073    3304 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:34:24.747148    3304 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:34:24.747332    3304 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000 san=[127.0.0.1 192.169.0.5 ha-632000 localhost minikube]
	I0916 03:34:24.788917    3304 provision.go:177] copyRemoteCerts
	I0916 03:34:24.788977    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:34:24.788991    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.789102    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.789198    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.789301    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.789395    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:24.825705    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:34:24.825778    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:34:24.845370    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:34:24.845432    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 03:34:24.865178    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:34:24.865251    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:34:24.884831    3304 provision.go:87] duration metric: took 138.944021ms to configureAuth
	I0916 03:34:24.884843    3304 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:34:24.884967    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:24.884979    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:24.885124    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.885220    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.885311    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.885386    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.885471    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.885588    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.885725    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.885733    3304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:34:24.943488    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:34:24.943500    3304 buildroot.go:70] root file system type: tmpfs
	I0916 03:34:24.943574    3304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:34:24.943589    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:24.943717    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:24.943811    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.943888    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:24.943974    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:24.944108    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:24.944249    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:24.944292    3304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:34:25.012349    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:34:25.012368    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:25.012501    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:25.012596    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:25.012680    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:25.012785    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:25.012929    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:25.013072    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:25.013083    3304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:34:26.666741    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:34:26.666755    3304 machine.go:96] duration metric: took 15.113997249s to provisionDockerMachine
	I0916 03:34:26.666768    3304 start.go:293] postStartSetup for "ha-632000" (driver="hyperkit")
	I0916 03:34:26.666776    3304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:34:26.666787    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.666978    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:34:26.666992    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.667087    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.667194    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.667273    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.667350    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.706592    3304 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:34:26.711498    3304 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:34:26.711513    3304 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:34:26.711618    3304 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:34:26.711797    3304 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:34:26.711804    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:34:26.712020    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:34:26.722253    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:34:26.749449    3304 start.go:296] duration metric: took 82.672854ms for postStartSetup
	I0916 03:34:26.749482    3304 fix.go:56] duration metric: took 15.371327135s for fixHost
	I0916 03:34:26.749494    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.749632    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.749729    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.749823    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.749915    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.750058    3304 main.go:141] libmachine: Using SSH client type: native
	I0916 03:34:26.750196    3304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1a06820] 0x1a09500 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0916 03:34:26.750203    3304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:34:26.808207    3304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482866.972905014
	
	I0916 03:34:26.808217    3304 fix.go:216] guest clock: 1726482866.972905014
	I0916 03:34:26.808223    3304 fix.go:229] Guest: 2024-09-16 03:34:26.972905014 -0700 PDT Remote: 2024-09-16 03:34:26.749485 -0700 PDT m=+15.815539300 (delta=223.420014ms)
	I0916 03:34:26.808242    3304 fix.go:200] guest clock delta is within tolerance: 223.420014ms
	I0916 03:34:26.808245    3304 start.go:83] releasing machines lock for "ha-632000", held for 15.430137129s
	I0916 03:34:26.808262    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808401    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:26.808513    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808823    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.808942    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:26.809043    3304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:34:26.809076    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.809101    3304 ssh_runner.go:195] Run: cat /version.json
	I0916 03:34:26.809110    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:26.809188    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.809211    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:26.809279    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.809305    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:26.809350    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.809401    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:26.809429    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.809486    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:26.841629    3304 ssh_runner.go:195] Run: systemctl --version
	I0916 03:34:26.888749    3304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:34:26.895634    3304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:34:26.895690    3304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:34:26.909307    3304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:34:26.909320    3304 start.go:495] detecting cgroup driver to use...
	I0916 03:34:26.909423    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:34:26.924212    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:34:26.933397    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:34:26.942270    3304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:34:26.942318    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:34:26.951238    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:34:26.960242    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:34:26.969261    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:34:26.978143    3304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:34:26.987134    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:34:26.996067    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:34:27.004962    3304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
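The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place (pause image, cgroup driver, runtime type). A sketch of the two most consequential edits, applied to a throwaway copy of a containerd config rather than the real file; the sample TOML content is an assumption for illustration, and `sed -i` here is the GNU form used inside the Linux guest:

```shell
# Throwaway containerd config fragment to rewrite.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
EOF
# Pin the sandbox (pause) image, preserving leading indentation via \1.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$conf"
# Force the cgroupfs driver, matching the "configuring containerd" log line.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$conf"
cat "$conf"
```

After edits like these, a `systemctl daemon-reload` and `systemctl restart containerd` (as the log shows next) are required for containerd to pick up the new config.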
	I0916 03:34:27.013761    3304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:34:27.021787    3304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:34:27.029829    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:27.137695    3304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:34:27.155569    3304 start.go:495] detecting cgroup driver to use...
	I0916 03:34:27.155665    3304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:34:27.175875    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:34:27.186405    3304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:34:27.201501    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:34:27.212338    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:34:27.223624    3304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:34:27.245802    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:34:27.256277    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:34:27.271232    3304 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:34:27.274402    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:34:27.281587    3304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:34:27.300136    3304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:34:27.396868    3304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:34:27.504053    3304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:34:27.504125    3304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:34:27.518529    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:27.609526    3304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:34:29.935674    3304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.326163573s)
	I0916 03:34:29.935744    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:34:29.947246    3304 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:34:29.961201    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:34:29.971947    3304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:34:30.067117    3304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:34:30.160710    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.263470    3304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:34:30.277335    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:34:30.288368    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.383376    3304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:34:30.442288    3304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:34:30.442381    3304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:34:30.446794    3304 start.go:563] Will wait 60s for crictl version
	I0916 03:34:30.446848    3304 ssh_runner.go:195] Run: which crictl
	I0916 03:34:30.449770    3304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:34:30.480712    3304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:34:30.480801    3304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:34:30.506592    3304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:34:30.566841    3304 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:34:30.566887    3304 main.go:141] libmachine: (ha-632000) Calling .GetIP
	I0916 03:34:30.567295    3304 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0916 03:34:30.572033    3304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
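The `/etc/hosts` update above is a filter-and-append: drop any existing `host.minikube.internal` entry, then append the current one, writing through a temp file so the replacement is a single `cp`. Sketched here against a scratch hosts file (the stale `192.169.0.9` entry is invented to show the replacement):

```shell
# Scratch hosts file with a stale host.minikube.internal entry.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n' > "$hosts"
# Filter out the old entry (tab-anchored match, as in the log), append the new one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The same pattern appears again later in the log for `control-plane.minikube.internal`; anchoring on the tab plus `$` keeps the filter from matching unrelated hostnames that merely end in the same suffix.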
	I0916 03:34:30.581666    3304 kubeadm.go:883] updating cluster {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 03:34:30.581730    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:34:30.581809    3304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:34:30.595489    3304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:34:30.595503    3304 docker.go:615] Images already preloaded, skipping extraction
	I0916 03:34:30.595591    3304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 03:34:30.607238    3304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 03:34:30.607254    3304 cache_images.go:84] Images are preloaded, skipping loading
	I0916 03:34:30.607262    3304 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.1 docker true true} ...
	I0916 03:34:30.607335    3304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:34:30.607418    3304 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 03:34:30.640314    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:30.640327    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:30.640333    3304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 03:34:30.640349    3304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-632000 NodeName:ha-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 03:34:30.640444    3304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-632000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 03:34:30.640518    3304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:34:30.648388    3304 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 03:34:30.648440    3304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 03:34:30.655689    3304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0916 03:34:30.669172    3304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:34:30.682497    3304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0916 03:34:30.696410    3304 ssh_runner.go:195] Run: grep 192.169.0.5	control-plane.minikube.internal$ /etc/hosts
	I0916 03:34:30.699419    3304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:34:30.708827    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:30.800708    3304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:34:30.815503    3304 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000 for IP: 192.169.0.5
	I0916 03:34:30.815516    3304 certs.go:194] generating shared ca certs ...
	I0916 03:34:30.815526    3304 certs.go:226] acquiring lock for ca certs: {Name:mkbe6743287669047aae1be1aaf47d24793fafa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:30.815719    3304 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key
	I0916 03:34:30.815802    3304 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key
	I0916 03:34:30.815814    3304 certs.go:256] generating profile certs ...
	I0916 03:34:30.815917    3304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key
	I0916 03:34:30.816003    3304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.63f9e250
	I0916 03:34:30.816074    3304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key
	I0916 03:34:30.816082    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 03:34:30.816104    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 03:34:30.816121    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 03:34:30.816140    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 03:34:30.816156    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 03:34:30.816185    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 03:34:30.816213    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 03:34:30.816231    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 03:34:30.816330    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem (1338 bytes)
	W0916 03:34:30.816382    3304 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649_empty.pem, impossibly tiny 0 bytes
	I0916 03:34:30.816391    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 03:34:30.816426    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem (1082 bytes)
	I0916 03:34:30.816460    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:34:30.816489    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem (1675 bytes)
	I0916 03:34:30.816555    3304 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:34:30.816589    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem -> /usr/share/ca-certificates/1649.pem
	I0916 03:34:30.816610    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /usr/share/ca-certificates/16492.pem
	I0916 03:34:30.816628    3304 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:30.817136    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:34:30.847112    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:34:30.870611    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:34:30.894802    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:34:30.924616    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0916 03:34:30.952802    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:34:30.972374    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:34:30.992413    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 03:34:31.012243    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem --> /usr/share/ca-certificates/1649.pem (1338 bytes)
	I0916 03:34:31.032033    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /usr/share/ca-certificates/16492.pem (1708 bytes)
	I0916 03:34:31.051855    3304 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:34:31.071449    3304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 03:34:31.085070    3304 ssh_runner.go:195] Run: openssl version
	I0916 03:34:31.089404    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:34:31.097650    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.101068    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.101109    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:34:31.105337    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 03:34:31.113599    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1649.pem && ln -fs /usr/share/ca-certificates/1649.pem /etc/ssl/certs/1649.pem"
	I0916 03:34:31.122322    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.126104    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:24 /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.126171    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1649.pem
	I0916 03:34:31.130938    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1649.pem /etc/ssl/certs/51391683.0"
	I0916 03:34:31.139986    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16492.pem && ln -fs /usr/share/ca-certificates/16492.pem /etc/ssl/certs/16492.pem"
	I0916 03:34:31.149139    3304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.152974    3304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:24 /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.153041    3304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16492.pem
	I0916 03:34:31.157722    3304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16492.pem /etc/ssl/certs/3ec20f2e.0"
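The `b5213941.0`-style symlinks above are OpenSSL subject-hash names: `/etc/ssl/certs/<hash>.0` is how the TLS stack locates a trusted CA. A sketch of the same layout using a freshly generated throwaway certificate in a temp directory (the `sketchCA` subject and paths are illustrative):

```shell
# Generate a short-lived self-signed cert to stand in for minikubeCA.
certdir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$certdir/ca.key" \
  -out "$certdir/ca.pem" -days 1 -subj "/CN=sketchCA" 2>/dev/null
# Compute the subject hash and create the <hash>.0 link, as the log does
# with "test -L ... || ln -fs ...".
hash="$(openssl x509 -hash -noout -in "$certdir/ca.pem")"
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
ls -l "$certdir/$hash.0"
```

The `.0` suffix disambiguates hash collisions; a second CA with the same subject hash would get `.1`.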
	I0916 03:34:31.166758    3304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:34:31.170375    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 03:34:31.174736    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 03:34:31.178948    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 03:34:31.183262    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 03:34:31.187460    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 03:34:31.191725    3304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
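Each `openssl x509 ... -checkend 86400` probe above exits zero only if the certificate is still valid 86400 seconds (24 hours) from now, which is how minikube decides whether control-plane certs need regeneration. A sketch on a freshly minted two-day cert (names and subject are illustrative):

```shell
# Mint a cert valid for 2 days, then ask whether it survives the next 24h.
tmp="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/sketch.key" \
  -out "$tmp/sketch.crt" -days 2 -subj "/CN=sketch" 2>/dev/null
# Exit status 0 means "will not expire within 86400 seconds".
openssl x509 -noout -in "$tmp/sketch.crt" -checkend 86400 \
  && echo "valid for at least 24h"
```

Running the same check with a window longer than the cert's remaining lifetime (e.g. `-checkend 259200` against this 2-day cert) makes it exit non-zero instead.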
	I0916 03:34:31.195983    3304 kubeadm.go:392] StartCluster: {Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:34:31.196097    3304 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:34:31.208147    3304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 03:34:31.215656    3304 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 03:34:31.215667    3304 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 03:34:31.215706    3304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 03:34:31.223055    3304 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:34:31.223382    3304 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-632000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:31.223473    3304 kubeconfig.go:62] /Users/jenkins/minikube-integration/19651-1087/kubeconfig needs updating (will repair): [kubeconfig missing "ha-632000" cluster setting kubeconfig missing "ha-632000" context setting]
	I0916 03:34:31.223667    3304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:31.224198    3304 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:31.224396    3304 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x30dc720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:34:31.224736    3304 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 03:34:31.224912    3304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 03:34:31.232171    3304 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0916 03:34:31.232190    3304 kubeadm.go:1160] stopping kube-system containers ...
	I0916 03:34:31.232260    3304 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 03:34:31.247135    3304 docker.go:483] Stopping containers: [9cdebd7b24ee 1a9b8d16193d b3d5179d6125 943f7896df50 24a42374ff36 ea367824f743 da46c43d13fe fae437a3e240 b16c72338b6a 7d1ed68d9ed6 6dd1718ae6cb 69e9ce19f136 95857f7ef6fa 24a44bacb8b1 04ad9902d4d1 cf944230e79e 61ff2b466426 3a54b2c915f8]
	I0916 03:34:31.247235    3304 ssh_runner.go:195] Run: docker stop 9cdebd7b24ee 1a9b8d16193d b3d5179d6125 943f7896df50 24a42374ff36 ea367824f743 da46c43d13fe fae437a3e240 b16c72338b6a 7d1ed68d9ed6 6dd1718ae6cb 69e9ce19f136 95857f7ef6fa 24a44bacb8b1 04ad9902d4d1 cf944230e79e 61ff2b466426 3a54b2c915f8
	I0916 03:34:31.264064    3304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 03:34:31.275855    3304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 03:34:31.284502    3304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 03:34:31.284510    3304 kubeadm.go:157] found existing configuration files:
	
	I0916 03:34:31.284556    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 03:34:31.291523    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 03:34:31.291565    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 03:34:31.298958    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 03:34:31.306045    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 03:34:31.306093    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 03:34:31.313603    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 03:34:31.320619    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 03:34:31.320661    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 03:34:31.328130    3304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 03:34:31.335290    3304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 03:34:31.335333    3304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 03:34:31.342670    3304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 03:34:31.350069    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:31.414838    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.543650    3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.128812527s)
	I0916 03:34:32.543673    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.699189    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.752257    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:32.803312    3304 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:34:32.803394    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.303990    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.803493    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:34:33.816653    3304 api_server.go:72] duration metric: took 1.013365866s to wait for apiserver process to appear ...
	I0916 03:34:33.816667    3304 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:34:33.816684    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.439673    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 03:34:36.439690    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 03:34:36.439698    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.462051    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0916 03:34:36.462067    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0916 03:34:36.816809    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:36.821366    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:34:36.821386    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:34:37.316789    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:37.325604    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 03:34:37.325621    3304 api_server.go:103] status: https://192.169.0.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 03:34:37.817112    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:34:37.821324    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:34:37.821383    3304 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:34:37.821389    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:37.821396    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:37.821406    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:37.826801    3304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 03:34:37.826862    3304 api_server.go:141] control plane version: v1.31.1
	I0916 03:34:37.826872    3304 api_server.go:131] duration metric: took 4.010264636s to wait for apiserver health ...
	I0916 03:34:37.826879    3304 cni.go:84] Creating CNI manager for ""
	I0916 03:34:37.826883    3304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:34:37.851204    3304 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 03:34:37.871931    3304 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 03:34:37.877049    3304 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 03:34:37.877061    3304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 03:34:37.909782    3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 03:34:38.392394    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:34:38.392445    3304 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 03:34:38.392455    3304 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 03:34:38.392506    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:38.392512    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.392517    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.392522    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.396201    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:38.399569    3304 system_pods.go:59] 9 kube-system pods found
	I0916 03:34:38.399589    3304 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 03:34:38.399596    3304 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 03:34:38.399602    3304 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 03:34:38.399606    3304 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:34:38.399611    3304 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 03:34:38.399615    3304 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 03:34:38.399618    3304 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:34:38.399624    3304 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 03:34:38.399627    3304 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running
	I0916 03:34:38.399631    3304 system_pods.go:74] duration metric: took 7.226704ms to wait for pod list to return data ...
	I0916 03:34:38.399639    3304 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:34:38.399678    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:34:38.399683    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.399695    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.399700    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.407687    3304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 03:34:38.408404    3304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:34:38.408417    3304 node_conditions.go:123] node cpu capacity is 2
	I0916 03:34:38.408426    3304 node_conditions.go:105] duration metric: took 8.78368ms to run NodePressure ...
	I0916 03:34:38.408437    3304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 03:34:38.708991    3304 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 03:34:38.709059    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0916 03:34:38.709065    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.709071    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.709079    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.710702    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.711793    3304 kubeadm.go:739] kubelet initialised
	I0916 03:34:38.711802    3304 kubeadm.go:740] duration metric: took 2.798868ms waiting for restarted kubelet to initialise ...
	I0916 03:34:38.711809    3304 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:38.711861    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:38.711867    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.711873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.711877    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.713532    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.715488    3304 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.715533    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:38.715538    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.715554    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.715559    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.716876    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.718440    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.718449    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.718454    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.718458    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.720116    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.720480    3304 pod_ready.go:98] node "ha-632000" hosting pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.720490    3304 pod_ready.go:82] duration metric: took 4.992557ms for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.720496    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.720501    3304 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.720529    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:34:38.720535    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.720540    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.720545    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.721899    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.722296    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.722303    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.722309    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.722311    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.723571    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.723979    3304 pod_ready.go:98] node "ha-632000" hosting pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.723988    3304 pod_ready.go:82] duration metric: took 3.482993ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.723994    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.723998    3304 pod_ready.go:79] waiting up to 4m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.724022    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:34:38.724027    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.724032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.724037    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.725189    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.725718    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.725725    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.725730    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.725734    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.726882    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.727191    3304 pod_ready.go:98] node "ha-632000" hosting pod "etcd-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.727201    3304 pod_ready.go:82] duration metric: took 3.198228ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.727206    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "etcd-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.727213    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.727242    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:34:38.727247    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.727252    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.727256    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.728489    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:38.793699    3304 request.go:632] Waited for 64.878296ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.793776    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:38.793786    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.793798    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.793825    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.796515    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:38.797110    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-apiserver-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.797125    3304 pod_ready.go:82] duration metric: took 69.907608ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:38.797133    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-apiserver-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:38.797139    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:38.993735    3304 request.go:632] Waited for 196.525215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:34:38.993809    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:34:38.993850    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:38.993865    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:38.993871    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:38.997843    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:39.193300    3304 request.go:632] Waited for 194.87785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.193380    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.193388    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.193397    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.193401    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.195202    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:39.195482    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-controller-manager-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.195492    3304 pod_ready.go:82] duration metric: took 398.353384ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.195500    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-controller-manager-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.195505    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:39.392617    3304 request.go:632] Waited for 197.072336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:34:39.392682    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:34:39.392689    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.392697    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.392704    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.395196    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:39.593726    3304 request.go:632] Waited for 198.129679ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.593859    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.593865    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.593873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.593878    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.595714    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:39.596185    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-proxy-g95xd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.596200    3304 pod_ready.go:82] duration metric: took 400.697103ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.596209    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-proxy-g95xd" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.596215    3304 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:39.792553    3304 request.go:632] Waited for 196.293737ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:34:39.792591    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:34:39.792595    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.792601    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.792617    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.795184    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:39.993050    3304 request.go:632] Waited for 197.435612ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.993096    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:39.993104    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:39.993114    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:39.993122    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:39.996204    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:39.996546    3304 pod_ready.go:98] node "ha-632000" hosting pod "kube-scheduler-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.996566    3304 pod_ready.go:82] duration metric: took 400.341694ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	E0916 03:34:39.996575    3304 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-632000" hosting pod "kube-scheduler-ha-632000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-632000" has status "Ready":"False"
	I0916 03:34:39.996582    3304 pod_ready.go:39] duration metric: took 1.284786546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:39.996594    3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 03:34:40.010623    3304 ops.go:34] apiserver oom_adj: -16
	I0916 03:34:40.010643    3304 kubeadm.go:597] duration metric: took 8.795112796s to restartPrimaryControlPlane
	I0916 03:34:40.010652    3304 kubeadm.go:394] duration metric: took 8.814823261s to StartCluster
	I0916 03:34:40.010667    3304 settings.go:142] acquiring lock: {Name:mk3e31582aa73ce4c6a48b30dd26240f48d1f910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:40.010790    3304 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:40.011227    3304 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/kubeconfig: {Name:mk8fc931c106eab82d9969ffc5e44e59dbd815a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:34:40.011508    3304 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 03:34:40.011531    3304 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 03:34:40.011584    3304 addons.go:69] Setting storage-provisioner=true in profile "ha-632000"
	I0916 03:34:40.011604    3304 addons.go:234] Setting addon storage-provisioner=true in "ha-632000"
	I0916 03:34:40.011608    3304 addons.go:69] Setting default-storageclass=true in profile "ha-632000"
	W0916 03:34:40.011621    3304 addons.go:243] addon storage-provisioner should already be in state true
	I0916 03:34:40.011628    3304 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:34:40.011632    3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-632000"
	I0916 03:34:40.011649    3304 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:34:40.011914    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.011928    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.011936    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.011951    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.020914    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51044
	I0916 03:34:40.020919    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51045
	I0916 03:34:40.021274    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.021295    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.021593    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.021595    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.021606    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.021607    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.021814    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.021881    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.021929    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.022023    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.022096    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.022273    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.022303    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.024436    3304 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:34:40.024681    3304 kapi.go:59] client config for ha-632000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x30dc720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 03:34:40.024973    3304 addons.go:234] Setting addon default-storageclass=true in "ha-632000"
	W0916 03:34:40.024982    3304 addons.go:243] addon default-storageclass should already be in state true
	I0916 03:34:40.024997    3304 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:34:40.025213    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.025241    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.030857    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51048
	I0916 03:34:40.031184    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.031486    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.031503    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.031718    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.031837    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.031920    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.031987    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.032954    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:40.033423    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51050
	I0916 03:34:40.034570    3304 out.go:177] * Verifying Kubernetes components...
	I0916 03:34:40.035362    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.056602    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.056630    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.057113    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.057705    3304 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:34:40.057742    3304 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:34:40.066758    3304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51052
	I0916 03:34:40.067114    3304 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:34:40.067418    3304 main.go:141] libmachine: Using API Version  1
	I0916 03:34:40.067429    3304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:34:40.067620    3304 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:34:40.067747    3304 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:34:40.067836    3304 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:34:40.067909    3304 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:34:40.068849    3304 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:34:40.068980    3304 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 03:34:40.068987    3304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 03:34:40.068995    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:40.069083    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:40.069170    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:40.069263    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:40.069351    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:40.092378    3304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 03:34:40.114656    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 03:34:40.129518    3304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:34:40.150612    3304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:34:40.150628    3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 03:34:40.150652    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:34:40.150820    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:34:40.150919    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:34:40.151009    3304 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:34:40.151105    3304 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:34:40.312530    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.312544    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.312710    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.312718    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.312726    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.312730    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.312744    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.312859    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.312865    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.312868    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.312950    3304 round_trippers.go:463] GET https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 03:34:40.312957    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.312962    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.312967    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.314896    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.315270    3304 round_trippers.go:463] PUT https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 03:34:40.315277    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.315282    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.315286    3304 round_trippers.go:473]     Content-Type: application/json
	I0916 03:34:40.315288    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.317294    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.317394    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.317402    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.317552    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.317560    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.317571    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.365668    3304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:34:40.376406    3304 node_ready.go:35] waiting up to 6m0s for node "ha-632000" to be "Ready" ...
	I0916 03:34:40.376479    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:40.376485    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.376491    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.376494    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.378267    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.426079    3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 03:34:40.876657    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:40.876672    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:40.876678    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:40.876681    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:40.878242    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:40.883063    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.883074    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.883237    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.883246    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.883255    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.883256    3304 main.go:141] libmachine: Making call to close driver server
	I0916 03:34:40.883283    3304 main.go:141] libmachine: (ha-632000) Calling .Close
	I0916 03:34:40.883399    3304 main.go:141] libmachine: Successfully made call to close driver server
	I0916 03:34:40.883408    3304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 03:34:40.883419    3304 main.go:141] libmachine: (ha-632000) DBG | Closing plugin on server side
	I0916 03:34:40.904880    3304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 03:34:40.945708    3304 addons.go:510] duration metric: took 934.197445ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 03:34:41.376796    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:41.376817    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:41.376828    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:41.376835    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:41.379843    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:41.878402    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:41.878423    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:41.878434    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:41.878441    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:41.880784    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:42.377995    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:42.378017    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:42.378028    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:42.378036    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:42.380149    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:42.380565    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:42.877115    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:42.877132    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:42.877138    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:42.877142    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:42.878996    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:43.377951    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:43.377971    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:43.377979    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:43.377983    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:43.380950    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:43.877542    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:43.877568    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:43.877585    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:43.877597    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:43.880401    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:44.378673    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:44.378699    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:44.378711    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:44.378718    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:44.380901    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:44.381440    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:44.878590    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:44.878617    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:44.878627    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:44.878633    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:44.881276    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:45.376997    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:45.377021    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:45.377032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:45.377038    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:45.379757    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:45.877711    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:45.877733    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:45.877744    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:45.877750    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:45.880462    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:46.378583    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:46.378609    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:46.378626    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:46.378635    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:46.381444    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:46.381801    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:46.877038    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:46.877050    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:46.877056    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:46.877059    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:46.878411    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:47.377110    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:47.377131    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:47.377143    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:47.377149    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:47.379987    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:47.877576    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:47.877593    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:47.877602    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:47.877609    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:47.879788    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.377874    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:48.377894    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:48.377903    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:48.377908    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:48.380037    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.877671    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:48.877726    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:48.877745    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:48.877755    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:48.880509    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:48.881074    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:49.378095    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:49.378122    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:49.378133    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:49.378142    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:49.381074    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:49.877863    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:49.877880    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:49.877889    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:49.877893    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:49.880159    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:50.377505    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:50.377516    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:50.377523    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:50.377526    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:50.379284    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:50.876843    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:50.876865    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:50.876876    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:50.876883    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:50.879406    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:51.378297    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:51.378323    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:51.378361    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:51.378375    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:51.381048    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:51.381405    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:51.876417    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:51.876433    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:51.876441    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:51.876445    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:51.878213    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:52.377976    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:52.377998    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:52.378009    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:52.378017    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:52.380591    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:52.876571    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:52.876592    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:52.876609    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:52.876615    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:52.879141    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:53.378471    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:53.378497    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:53.378509    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:53.378514    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:53.381242    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:53.381593    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:53.878495    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:53.878519    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:53.878531    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:53.878538    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:53.880732    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:54.376608    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:54.376631    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:54.376642    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:54.376649    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:54.379663    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:54.876659    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:54.876681    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:54.876694    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:54.876701    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:54.879185    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.376950    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:55.376977    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:55.376987    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:55.376993    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:55.379583    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.877176    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:55.877202    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:55.877214    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:55.877222    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:55.879951    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:55.880276    3304 node_ready.go:53] node "ha-632000" has status "Ready":"False"
	I0916 03:34:56.377034    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:56.377056    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:56.377068    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:56.377075    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:56.379858    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:56.876661    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:56.876682    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:56.876696    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:56.876710    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:56.879328    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.377125    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.377167    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.377177    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.377182    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.379516    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.379811    3304 node_ready.go:49] node "ha-632000" has status "Ready":"True"
	I0916 03:34:57.379823    3304 node_ready.go:38] duration metric: took 17.003672807s for node "ha-632000" to be "Ready" ...
	I0916 03:34:57.379829    3304 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:34:57.379871    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:34:57.379877    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.379882    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.379886    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.381644    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.383606    3304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:34:57.383645    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:57.383650    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.383655    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.383658    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.384733    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.385106    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.385113    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.385118    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.385122    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.386159    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:57.884163    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:57.884182    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.884189    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.884195    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.886295    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:57.886956    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:57.886964    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:57.886970    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:57.886973    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:57.888563    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.384115    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:58.384137    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.384149    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.384155    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.387276    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:58.387749    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:58.387759    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.387767    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.387792    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.389133    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.885226    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:58.885246    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.885275    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.885281    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.886994    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:58.887489    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:58.887496    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:58.887502    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:58.887506    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:58.888603    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:59.385814    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:59.385840    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.385851    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.385858    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.388513    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:34:59.389033    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:59.389044    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.389051    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.389062    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.390522    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:34:59.391501    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:34:59.884866    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:34:59.884892    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.884903    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.884911    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.887993    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:34:59.888511    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:34:59.888521    3304 round_trippers.go:469] Request Headers:
	I0916 03:34:59.888529    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:34:59.888533    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:34:59.891452    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.384323    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:00.384338    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.384347    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.384353    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.386438    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.386904    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:00.386911    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.386917    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.386920    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.388042    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:00.884219    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:00.884241    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.884260    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.884267    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.887069    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:00.887514    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:00.887523    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:00.887530    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:00.887535    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:00.888924    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.385822    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:01.385844    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.385855    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.385862    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.388461    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:01.388907    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:01.388916    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.388924    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.388933    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.390336    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.884266    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:01.884323    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.884336    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.884344    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.887031    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:01.887802    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:01.887813    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:01.887821    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:01.887826    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:01.889176    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:01.889480    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:02.384160    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:02.384181    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.384191    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.384197    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.387183    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:02.387689    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:02.387699    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.387707    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.387722    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.389179    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:02.885122    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:02.885146    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.885157    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.885165    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.887435    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:02.887961    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:02.887968    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:02.887974    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:02.887977    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:02.889091    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.385251    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:03.385279    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.385291    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.385298    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.387871    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:03.388816    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:03.388826    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.388834    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.388839    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.390130    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.884630    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:03.884656    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.884666    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.884673    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.887524    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:03.888016    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:03.888027    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:03.888033    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:03.888038    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:03.889721    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:03.890029    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:04.383979    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:04.384001    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.384013    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.384018    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.386805    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:04.387373    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:04.387384    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.387398    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.387405    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.388849    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:04.884523    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:04.884548    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.884560    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.884565    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.887300    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:04.888008    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:04.888019    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:04.888027    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:04.888032    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:04.889577    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.384175    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:05.384198    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.384215    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.384222    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.386418    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:05.387016    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:05.387024    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.387029    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.387034    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.388282    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.884900    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:05.884925    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.884937    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.884944    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.887666    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:05.888293    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:05.888304    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:05.888311    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:05.888317    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:05.889918    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:05.890216    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:06.385456    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:06.385472    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.385480    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.385486    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.387518    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:06.388054    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:06.388061    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.388068    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.388086    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.389437    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:06.884205    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:06.884230    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.884242    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.884247    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.887550    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:06.888131    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:06.888139    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:06.888145    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:06.888148    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:06.889317    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.383655    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:07.383672    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.383693    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.383698    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.385537    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.385926    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:07.385933    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.385940    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.385943    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.387166    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.885661    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:07.885688    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.885697    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.885702    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.888637    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:07.889170    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:07.889181    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:07.889189    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:07.889195    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:07.890632    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:07.891098    3304 pod_ready.go:103] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"False"
	I0916 03:35:08.385687    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:08.385714    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.385725    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.385740    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.388506    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:08.388962    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:08.388972    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.388980    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.388998    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.390628    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:08.883835    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:08.883850    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.883857    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.883860    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.886160    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:08.886618    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:08.886626    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:08.886632    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:08.886635    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:08.887892    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.384848    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-jkv8h
	I0916 03:35:09.384865    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.384873    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.384876    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.387304    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.387775    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.387783    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.387790    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.387793    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.389013    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.389277    3304 pod_ready.go:93] pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.389286    3304 pod_ready.go:82] duration metric: took 12.005863458s for pod "coredns-7c65d6cfc9-jkv8h" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.389293    3304 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.389324    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kmtsq
	I0916 03:35:09.389330    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.389336    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.389339    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.390307    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.390713    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.390720    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.390725    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.390730    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.391675    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.391982    3304 pod_ready.go:93] pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.391991    3304 pod_ready.go:82] duration metric: took 2.694094ms for pod "coredns-7c65d6cfc9-kmtsq" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.391997    3304 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.392029    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-632000
	I0916 03:35:09.392034    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.392040    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.392043    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.393164    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.393744    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.393751    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.393757    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.393761    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.394666    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:09.394943    3304 pod_ready.go:93] pod "etcd-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.394951    3304 pod_ready.go:82] duration metric: took 2.948803ms for pod "etcd-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.394959    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.394992    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-632000
	I0916 03:35:09.394997    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.395003    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.395007    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.396122    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.396466    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.396473    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.396477    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.396480    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.397544    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.397850    3304 pod_ready.go:93] pod "kube-apiserver-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.397857    3304 pod_ready.go:82] duration metric: took 2.893653ms for pod "kube-apiserver-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.397863    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.397889    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-632000
	I0916 03:35:09.397894    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.397900    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.397905    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.399129    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.399515    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.399522    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.399527    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.399530    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.400631    3304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 03:35:09.400898    3304 pod_ready.go:93] pod "kube-controller-manager-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.400906    3304 pod_ready.go:82] duration metric: took 3.037606ms for pod "kube-controller-manager-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.400918    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.585908    3304 request.go:632] Waited for 184.827903ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:35:09.585952    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g95xd
	I0916 03:35:09.585960    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.585968    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.585976    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.588274    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.786885    3304 request.go:632] Waited for 198.109185ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.787049    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:09.787060    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.787071    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.787079    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.789763    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:09.790084    3304 pod_ready.go:93] pod "kube-proxy-g95xd" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:09.790096    3304 pod_ready.go:82] duration metric: took 389.178105ms for pod "kube-proxy-g95xd" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.790106    3304 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:09.986947    3304 request.go:632] Waited for 196.780374ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:35:09.987087    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-632000
	I0916 03:35:09.987098    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:09.987109    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:09.987122    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:09.989733    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.186939    3304 request.go:632] Waited for 196.171001ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:10.187059    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-632000
	I0916 03:35:10.187069    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.187089    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.187095    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.189807    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.190399    3304 pod_ready.go:93] pod "kube-scheduler-ha-632000" in "kube-system" namespace has status "Ready":"True"
	I0916 03:35:10.190418    3304 pod_ready.go:82] duration metric: took 400.304067ms for pod "kube-scheduler-ha-632000" in "kube-system" namespace to be "Ready" ...
	I0916 03:35:10.190428    3304 pod_ready.go:39] duration metric: took 12.810794631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 03:35:10.190446    3304 api_server.go:52] waiting for apiserver process to appear ...
	I0916 03:35:10.190528    3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:35:10.203518    3304 api_server.go:72] duration metric: took 30.192477873s to wait for apiserver process to appear ...
	I0916 03:35:10.203529    3304 api_server.go:88] waiting for apiserver healthz status ...
	I0916 03:35:10.203539    3304 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:35:10.206990    3304 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0916 03:35:10.207032    3304 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0916 03:35:10.207038    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.207044    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.207049    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.207501    3304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 03:35:10.207560    3304 api_server.go:141] control plane version: v1.31.1
	I0916 03:35:10.207569    3304 api_server.go:131] duration metric: took 4.036294ms to wait for apiserver health ...
	I0916 03:35:10.207574    3304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 03:35:10.385757    3304 request.go:632] Waited for 178.107675ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.385849    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.385859    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.385869    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.385875    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.389657    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.392081    3304 system_pods.go:59] 9 kube-system pods found
	I0916 03:35:10.392092    3304 system_pods.go:61] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:35:10.392096    3304 system_pods.go:61] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:35:10.392098    3304 system_pods.go:61] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:35:10.392101    3304 system_pods.go:61] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:35:10.392103    3304 system_pods.go:61] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:35:10.392106    3304 system_pods.go:61] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:35:10.392109    3304 system_pods.go:61] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:35:10.392113    3304 system_pods.go:61] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:35:10.392116    3304 system_pods.go:61] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 03:35:10.392122    3304 system_pods.go:74] duration metric: took 184.547797ms to wait for pod list to return data ...
	I0916 03:35:10.392132    3304 default_sa.go:34] waiting for default service account to be created ...
	I0916 03:35:10.585358    3304 request.go:632] Waited for 193.141273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:35:10.585408    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0916 03:35:10.585416    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.585427    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.585471    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.588788    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.588946    3304 default_sa.go:45] found service account: "default"
	I0916 03:35:10.588958    3304 default_sa.go:55] duration metric: took 196.824843ms for default service account to be created ...
	I0916 03:35:10.588968    3304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 03:35:10.785029    3304 request.go:632] Waited for 196.001491ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.785170    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0916 03:35:10.785182    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.785198    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.785205    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.788727    3304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 03:35:10.790881    3304 system_pods.go:86] 9 kube-system pods found
	I0916 03:35:10.790911    3304 system_pods.go:89] "coredns-7c65d6cfc9-jkv8h" [b174db06-0ade-4594-8b4b-6058301491f8] Running
	I0916 03:35:10.790916    3304 system_pods.go:89] "coredns-7c65d6cfc9-kmtsq" [535c3a5b-013f-4bdc-94c4-b109c6f43b0b] Running
	I0916 03:35:10.790919    3304 system_pods.go:89] "etcd-ha-632000" [168ae9d8-0de1-4283-95f4-599416ba05af] Running
	I0916 03:35:10.790922    3304 system_pods.go:89] "kindnet-gs2gm" [9931709a-b521-4642-8c91-c0bb52b9f284] Running
	I0916 03:35:10.790924    3304 system_pods.go:89] "kube-apiserver-ha-632000" [ca97e71b-48a4-4a80-9ad7-508c65db3fbb] Running
	I0916 03:35:10.790927    3304 system_pods.go:89] "kube-controller-manager-ha-632000" [86b88210-1110-4966-9149-f2a83a39d0b2] Running
	I0916 03:35:10.790935    3304 system_pods.go:89] "kube-proxy-g95xd" [98c1fdde-9eb3-4546-9d59-b18e4367907d] Running
	I0916 03:35:10.790939    3304 system_pods.go:89] "kube-scheduler-ha-632000" [1f602ad4-590a-4e8d-9b23-7fcded3a4b57] Running
	I0916 03:35:10.790945    3304 system_pods.go:89] "storage-provisioner" [75558cac-fdbb-4a54-94c5-31f3f3d980c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 03:35:10.790951    3304 system_pods.go:126] duration metric: took 201.981143ms to wait for k8s-apps to be running ...
	I0916 03:35:10.790964    3304 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 03:35:10.791037    3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:35:10.804245    3304 system_svc.go:56] duration metric: took 13.277206ms WaitForService to wait for kubelet
	I0916 03:35:10.804260    3304 kubeadm.go:582] duration metric: took 30.793230339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 03:35:10.804274    3304 node_conditions.go:102] verifying NodePressure condition ...
	I0916 03:35:10.985571    3304 request.go:632] Waited for 181.22969ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0916 03:35:10.985617    3304 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0916 03:35:10.985625    3304 round_trippers.go:469] Request Headers:
	I0916 03:35:10.985636    3304 round_trippers.go:473]     Accept: application/json, */*
	I0916 03:35:10.985645    3304 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0916 03:35:10.988104    3304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 03:35:10.988460    3304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 03:35:10.988470    3304 node_conditions.go:123] node cpu capacity is 2
	I0916 03:35:10.988475    3304 node_conditions.go:105] duration metric: took 184.185428ms to run NodePressure ...
	I0916 03:35:10.988484    3304 start.go:241] waiting for startup goroutines ...
	I0916 03:35:10.988489    3304 start.go:246] waiting for cluster config update ...
	I0916 03:35:10.988495    3304 start.go:255] writing updated cluster config ...
	I0916 03:35:10.989154    3304 ssh_runner.go:195] Run: rm -f paused
	I0916 03:35:11.029032    3304 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0916 03:35:11.049786    3304 out.go:201] 
	W0916 03:35:11.070937    3304 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0916 03:35:11.091559    3304 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0916 03:35:11.134086    3304 out.go:177] * Done! kubectl is now configured to use "ha-632000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593663222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593673564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:34:37 ha-632000 dockerd[928]: time="2024-09-16T10:34:37.593881815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:07 ha-632000 dockerd[922]: time="2024-09-16T10:35:07.563121477Z" level=info msg="ignoring event" container=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563622963Z" level=info msg="shim disconnected" id=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba namespace=moby
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563954778Z" level=warning msg="cleaning up after shim disconnected" id=d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba namespace=moby
	Sep 16 10:35:07 ha-632000 dockerd[928]: time="2024-09-16T10:35:07.563989325Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865227113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865264506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865295808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:08 ha-632000 dockerd[928]: time="2024-09-16T10:35:08.865376056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:08 ha-632000 cri-dockerd[1176]: time="2024-09-16T10:35:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae26258aa302306594e1686d25b0188561b24033ecc3092a2b33b8cec7c9026a/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051481955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051545488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051558461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.051819249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083133726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083198092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083207691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.083269287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 cri-dockerd[1176]: time="2024-09-16T10:35:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0bec5171951368b86ff04b5574c9c5cdfeb4cdd6eacba6ff4068daf99eca6536/resolv.conf as [nameserver 192.169.0.1]"
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300265928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300536225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300575094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:35:09 ha-632000 dockerd[928]: time="2024-09-16T10:35:09.300839535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3cf37607dad84       c69fa2e9cbf5f                                                                              6 seconds ago        Running             coredns                   1                   0bec517195136       coredns-7c65d6cfc9-jkv8h
	bdd5bdc32cf5f       c69fa2e9cbf5f                                                                              6 seconds ago        Running             coredns                   1                   ae26258aa3023       coredns-7c65d6cfc9-kmtsq
	afaf715ba2192       12968670680f4                                                                              38 seconds ago       Running             kindnet-cni               1                   1d37d305e287a       kindnet-gs2gm
	35d5611bea628       60c005f310ff3                                                                              38 seconds ago       Running             kube-proxy                1                   2887b4fd64c30       kube-proxy-g95xd
	d6e8db98c320f       6e38f40d628db                                                                              38 seconds ago       Exited              storage-provisioner       1                   b4724bae016fb       storage-provisioner
	8e82c5b6f1445       9aa1fad941575                                                                              42 seconds ago       Running             kube-scheduler            1                   676f5e9f1e9a8       kube-scheduler-ha-632000
	b75c9f7049a91       175ffd71cce3d                                                                              42 seconds ago       Running             kube-controller-manager   1                   f69748e5377e8       kube-controller-manager-ha-632000
	69c5f26bd56fc       6bab7719df100                                                                              42 seconds ago       Running             kube-apiserver            1                   80bc5bb77c1d2       kube-apiserver-ha-632000
	8977e4d70f2ee       2e96e5913fc06                                                                              42 seconds ago       Running             etcd                      1                   650dad7d2a118       etcd-ha-632000
	9cdebd7b24ee9       c69fa2e9cbf5f                                                                              About a minute ago   Exited              coredns                   0                   943f7896df505       coredns-7c65d6cfc9-jkv8h
	1a9b8d16193d8       c69fa2e9cbf5f                                                                              About a minute ago   Exited              coredns                   0                   ea367824f7436       coredns-7c65d6cfc9-kmtsq
	da46c43d13fe8       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166   About a minute ago   Exited              kindnet-cni               0                   b16c72338b6ac       kindnet-gs2gm
	fae437a3e2407       60c005f310ff3                                                                              About a minute ago   Exited              kube-proxy                0                   7d1ed68d9ed61       kube-proxy-g95xd
	6dd1718ae6cb7       2e96e5913fc06                                                                              About a minute ago   Exited              etcd                      0                   61ff2b4664268       etcd-ha-632000
	69e9ce19f1363       9aa1fad941575                                                                              About a minute ago   Exited              kube-scheduler            0                   cf944230e79eb       kube-scheduler-ha-632000
	95857f7ef6fa1       6bab7719df100                                                                              About a minute ago   Exited              kube-apiserver            0                   3a54b2c915f89       kube-apiserver-ha-632000
	24a44bacb8b1e       175ffd71cce3d                                                                              About a minute ago   Exited              kube-controller-manager   0                   04ad9902d4d19       kube-controller-manager-ha-632000
	
	
	==> coredns [1a9b8d16193d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3cf37607dad8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9cdebd7b24ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bdd5bdc32cf5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               ha-632000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-632000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-632000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T03_33_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-632000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:35:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:33:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:34:57 +0000   Mon, 16 Sep 2024 10:34:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-632000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 80143f0d20d747b59263aea5212f463f
	  System UUID:                151047b3-0000-0000-af67-8767f9a495bd
	  Boot ID:                    ef36167c-8045-4347-ac98-d12c1081351a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-jkv8h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 coredns-7c65d6cfc9-kmtsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-ha-632000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         109s
	  kube-system                 kindnet-gs2gm                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      104s
	  kube-system                 kube-apiserver-ha-632000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-ha-632000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-g95xd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-ha-632000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     108s               kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s               node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	  Normal  NodeReady                83s                kubelet          Node ha-632000 status is now: NodeReady
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node ha-632000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node ha-632000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node ha-632000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           37s                node-controller  Node ha-632000 event: Registered Node ha-632000 in Controller
	
	
	==> dmesg <==
	[  +0.008129] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691988] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006862] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.690338] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.223873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.475205] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +0.095359] systemd-fstab-generator[481]: Ignoring "noauto" option for root device
	[  +1.258412] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.597885] systemd-fstab-generator[847]: Ignoring "noauto" option for root device
	[  +0.270736] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.094069] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.123685] systemd-fstab-generator[914]: Ignoring "noauto" option for root device
	[  +2.457317] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.083989] systemd-fstab-generator[1141]: Ignoring "noauto" option for root device
	[  +0.111440] systemd-fstab-generator[1153]: Ignoring "noauto" option for root device
	[  +0.118119] systemd-fstab-generator[1168]: Ignoring "noauto" option for root device
	[  +0.416274] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +1.886558] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 261 callbacks suppressed
	[  +5.074379] kauditd_printk_skb: 95 callbacks suppressed
	[  +2.468516] systemd-fstab-generator[2276]: Ignoring "noauto" option for root device
	[Sep16 10:35] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [6dd1718ae6cb] <==
	{"level":"info","ts":"2024-09-16T10:33:24.707652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.707684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:33:24.712635Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:33:24.712748Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.714615Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.715358Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.717393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:33:24.717649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.717680Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:33:24.718273Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:33:24.719184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:33:24.719319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.719511Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:33:24.723827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:34:03.002942Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:34:03.002999Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"ha-632000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	{"level":"warn","ts":"2024-09-16T10:34:03.003078Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.003173Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.045479Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:34:03.045527Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:34:03.045562Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-09-16T10:34:03.046683Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:03.046742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:03.046750Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-632000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [8977e4d70f2e] <==
	{"level":"info","ts":"2024-09-16T10:34:33.842210Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:34:33.845570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860)"}
	{"level":"info","ts":"2024-09-16T10:34:33.845640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"b8c6c7563d17d844","added-peer-peer-urls":["https://192.169.0.5:2380"]}
	{"level":"info","ts":"2024-09-16T10:34:33.845775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:33.845847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:34:33.849233Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:33.849265Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-09-16T10:34:33.849871Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b8c6c7563d17d844","initial-advertise-peer-urls":["https://192.169.0.5:2380"],"listen-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:34:33.849948Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:34:35.728265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
	{"level":"info","ts":"2024-09-16T10:34:35.728405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.728644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-09-16T10:34:35.730226Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-632000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:34:35.730416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:35.730682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:34:35.731522Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:35.731693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:34:35.732169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.732170Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:34:35.732813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-09-16T10:34:35.733036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:35:16 up 1 min,  0 users,  load average: 0.59, 0.19, 0.07
	Linux ha-632000 5.10.207 #1 SMP Sun Sep 15 20:39:46 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [afaf715ba219] <==
	I0916 10:34:37.992823       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:34:37.996117       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:34:37.996293       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:34:37.996324       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:34:37.996333       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:34:38.586024       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:34:48.586883       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:34:48.587014       1 main.go:299] handling current node
	I0916 10:34:58.588144       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:34:58.588203       1 main.go:299] handling current node
	I0916 10:35:08.587084       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:35:08.587455       1 main.go:299] handling current node
	
	
	==> kindnet [da46c43d13fe] <==
	I0916 10:33:38.105068       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:33:38.105822       1 main.go:139] hostIP = 192.169.0.5
	podIP = 192.169.0.5
	I0916 10:33:38.105896       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:33:38.105923       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:33:38.105937       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:33:38.701206       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0916 10:33:48.705636       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:48.705736       1 main.go:299] handling current node
	I0916 10:33:58.706377       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0916 10:33:58.706440       1 main.go:299] handling current node
	
	
	==> kube-apiserver [69c5f26bd56f] <==
	I0916 10:34:36.621882       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:34:36.621893       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:34:36.621897       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:34:36.670791       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:34:36.671289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:34:36.671573       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:34:36.672001       1 policy_source.go:224] refreshing policies
	I0916 10:34:36.670914       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:34:36.672338       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:34:36.672470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:34:36.672555       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:34:36.672568       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:34:36.672982       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0916 10:34:36.676469       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:34:36.677144       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:34:36.724659       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:34:36.740613       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:34:37.574461       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:34:38.554987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:34:38.686490       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:34:38.697546       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:34:38.869795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:34:38.876774       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:34:40.310037       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:34:40.380264       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [95857f7ef6fa] <==
	W0916 10:34:03.020165       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020189       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020209       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020256       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020305       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020352       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020374       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020395       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020441       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020489       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020510       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020562       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020590       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020643       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020679       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020730       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020782       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020814       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020842       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020893       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.020948       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023259       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023329       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.023360       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 10:34:03.029700       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [24a44bacb8b1] <==
	I0916 10:33:32.792285       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.813766       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:32.836606       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:33:32.836825       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:33:33.208030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283196       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:33:33.283305       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:33:33.489386       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:33.785544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="393.158805ms"
	I0916 10:33:33.795508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.792999ms"
	I0916 10:33:33.796330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.465µs"
	I0916 10:33:33.796593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.299µs"
	I0916 10:33:53.221882       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.228026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:33:53.232440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.065µs"
	I0916 10:33:53.240523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.002µs"
	I0916 10:33:53.245409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.983µs"
	I0916 10:33:53.253146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.846µs"
	I0916 10:33:54.145943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.356µs"
	I0916 10:33:54.168878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.87555ms"
	I0916 10:33:54.169182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.016µs"
	I0916 10:33:54.179810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.838071ms"
	I0916 10:33:54.180086       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.84µs"
	I0916 10:33:57.576580       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:33:58.801252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	
	
	==> kube-controller-manager [b75c9f7049a9] <==
	I0916 10:34:39.965965       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:34:39.966418       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:39.969054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.999421ms"
	I0916 10:34:39.969135       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.787µs"
	I0916 10:34:39.989530       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:34:40.011735       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:34:40.044676       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:34:40.057740       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:34:40.062365       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:34:40.071587       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:40.095503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:40.139376       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:34:40.186827       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:34:40.589219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:40.667214       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:34:40.667250       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:34:57.405752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:34:57.412393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:35:00.117952       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 10:35:00.118253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-632000"
	I0916 10:35:09.452642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.695µs"
	I0916 10:35:09.486605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="10.460496ms"
	I0916 10:35:09.486799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="85.542µs"
	I0916 10:35:09.494578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="5.669414ms"
	I0916 10:35:09.495670       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.343µs"
	
	
	==> kube-proxy [35d5611bea62] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:34:37.766945       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:34:37.780385       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:34:37.780815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:34:37.812489       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:34:37.812528       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:34:37.812545       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:34:37.814902       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:34:37.815438       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:34:37.815466       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:34:37.817720       1 config.go:199] "Starting service config controller"
	I0916 10:34:37.817748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:34:37.817763       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:34:37.817766       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:34:37.818278       1 config.go:328] "Starting node config controller"
	I0916 10:34:37.818305       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:34:37.918323       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:34:37.918683       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:34:37.919059       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fae437a3e240] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 10:33:34.691611       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 10:33:34.697203       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0916 10:33:34.697256       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:33:34.722746       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 10:33:34.723060       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 10:33:34.723106       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:33:34.725578       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:33:34.725970       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:33:34.725998       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:33:34.726895       1 config.go:199] "Starting service config controller"
	I0916 10:33:34.726932       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:33:34.726949       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:33:34.726952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:33:34.727508       1 config.go:328] "Starting node config controller"
	I0916 10:33:34.727534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:33:34.827447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:33:34.827457       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:33:34.827564       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69e9ce19f136] <==
	E0916 10:33:25.865183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:25.865306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.862775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:25.865397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:33:25.865433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:25.863621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:25.865448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.708747       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:33:26.708911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.709690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:33:26.709836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.726430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:33:26.726462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.767336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:33:26.767420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.769370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:33:26.769455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:33:26.794289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:33:26.794346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:33:27.460783       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:34:02.987165       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e82c5b6f144] <==
	W0916 10:34:36.639348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.639411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.639583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:34:36.639715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.639882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:34:36.640007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:34:36.640271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:34:36.640470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.640643       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:34:36.640675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:34:36.641643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:34:36.641661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641670       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:34:36.641812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.641827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:34:36.641833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:34:36.643637       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:34:36.643672       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:34:38.124778       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:34:41 ha-632000 kubelet[1437]: E0916 10:34:41.998266    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:42 ha-632000 kubelet[1437]: E0916 10:34:42.998903    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:43 ha-632000 kubelet[1437]: E0916 10:34:43.030466    1437 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 16 10:34:43 ha-632000 kubelet[1437]: E0916 10:34:43.999614    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618095    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618289    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618332    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume podName:b174db06-0ade-4594-8b4b-6058301491f8 nodeName:}" failed. No retries permitted until 2024-09-16 10:34:52.61831104 +0000 UTC m=+19.750575570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume") pod "coredns-7c65d6cfc9-jkv8h" (UID: "b174db06-0ade-4594-8b4b-6058301491f8") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.618540    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume podName:535c3a5b-013f-4bdc-94c4-b109c6f43b0b nodeName:}" failed. No retries permitted until 2024-09-16 10:34:52.618517321 +0000 UTC m=+19.750781848 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume") pod "coredns-7c65d6cfc9-kmtsq" (UID: "535c3a5b-013f-4bdc-94c4-b109c6f43b0b") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:44 ha-632000 kubelet[1437]: E0916 10:34:44.998247    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:46 ha-632000 kubelet[1437]: E0916 10:34:46.000383    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:46 ha-632000 kubelet[1437]: E0916 10:34:46.997866    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:47 ha-632000 kubelet[1437]: E0916 10:34:47.997933    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:48 ha-632000 kubelet[1437]: E0916 10:34:48.034733    1437 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 16 10:34:48 ha-632000 kubelet[1437]: E0916 10:34:48.997973    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:49 ha-632000 kubelet[1437]: E0916 10:34:49.997909    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:50 ha-632000 kubelet[1437]: E0916 10:34:50.998946    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:34:51 ha-632000 kubelet[1437]: E0916 10:34:51.999476    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-jkv8h" podUID="b174db06-0ade-4594-8b4b-6058301491f8"
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705368    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705538    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume podName:535c3a5b-013f-4bdc-94c4-b109c6f43b0b nodeName:}" failed. No retries permitted until 2024-09-16 10:35:08.705508953 +0000 UTC m=+35.837773486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/535c3a5b-013f-4bdc-94c4-b109c6f43b0b-config-volume") pod "coredns-7c65d6cfc9-kmtsq" (UID: "535c3a5b-013f-4bdc-94c4-b109c6f43b0b") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705715    1437 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.705923    1437 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume podName:b174db06-0ade-4594-8b4b-6058301491f8 nodeName:}" failed. No retries permitted until 2024-09-16 10:35:08.705902918 +0000 UTC m=+35.838167449 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b174db06-0ade-4594-8b4b-6058301491f8-config-volume") pod "coredns-7c65d6cfc9-jkv8h" (UID: "b174db06-0ade-4594-8b4b-6058301491f8") : object "kube-system"/"coredns" not registered
	Sep 16 10:34:52 ha-632000 kubelet[1437]: E0916 10:34:52.998708    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-kmtsq" podUID="535c3a5b-013f-4bdc-94c4-b109c6f43b0b"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: I0916 10:35:08.423160    1437 scope.go:117] "RemoveContainer" containerID="b3d5179d61251454ae7a85db0a0429a411cae76733938adfbbf5ba05e89d0a44"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: I0916 10:35:08.423361    1437 scope.go:117] "RemoveContainer" containerID="d6e8db98c320f7b73ae8dbcf3185c03d9d231275048e6397a177256221c6a1ba"
	Sep 16 10:35:08 ha-632000 kubelet[1437]: E0916 10:35:08.423439    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75558cac-fdbb-4a54-94c5-31f3f3d980c3)\"" pod="kube-system/storage-provisioner" podUID="75558cac-fdbb-4a54-94c5-31f3f3d980c3"
	
	
	==> storage-provisioner [d6e8db98c320] <==
	I0916 10:34:37.541990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:35:07.552356       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-632000 -n ha-632000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-632000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.66s)
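The post-mortem checks that the harness runs above (helpers_test.go:254 and :261) can be reproduced by hand against the same profile. This is a sketch based on the commands visible in the log; it assumes the `ha-632000` profile still exists and that `minikube` and `kubectl` are on PATH.

```shell
# API-server state for the profile, as checked at helpers_test.go:254
minikube status --format='{{.APIServer}}' -p ha-632000 -n ha-632000

# Any pods not in the Running phase, as checked at helpers_test.go:261
kubectl --context ha-632000 get po -A \
  -o=jsonpath='{.items[*].metadata.name}' \
  --field-selector=status.phase!=Running
```

If the second command lists the coredns pods, that matches the kubelet errors above: the CNI config is uninitialized, so the network is never reported ready and the coredns config-volume mounts keep failing.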

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (338.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-632000 --control-plane -v=7 --alsologtostderr
E0916 03:35:42.327304    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:37:58.460015    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:38:26.169619    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-632000 --control-plane -v=7 --alsologtostderr: exit status 48 (5m23.059801121s)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster ha-632000 as [worker control-plane]
	* Starting "ha-632000-m02" control-plane node in "ha-632000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 03:35:18.512505    3372 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:35:18.512799    3372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:35:18.512804    3372 out.go:358] Setting ErrFile to fd 2...
	I0916 03:35:18.512808    3372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:35:18.512987    3372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:35:18.513342    3372 mustload.go:65] Loading cluster: ha-632000
	I0916 03:35:18.513651    3372 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:35:18.514005    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:18.514049    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:18.522267    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51123
	I0916 03:35:18.522672    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:18.523079    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:18.523109    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:18.523353    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:18.523506    3372 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:35:18.523594    3372 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:18.523656    3372 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:35:18.524620    3372 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:35:18.524889    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:18.524914    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:18.533279    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51125
	I0916 03:35:18.533597    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:18.533899    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:18.533910    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:18.534160    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:18.534279    3372 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:35:18.534381    3372 api_server.go:166] Checking apiserver status ...
	I0916 03:35:18.534443    3372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:35:18.534463    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:35:18.534537    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:35:18.534613    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:35:18.534687    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:35:18.534764    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:35:18.573892    3372 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1764/cgroup
	W0916 03:35:18.582164    3372 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1764/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:35:18.582220    3372 ssh_runner.go:195] Run: ls
	I0916 03:35:18.585491    3372 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0916 03:35:18.588723    3372 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	W0916 03:35:18.588763    3372 out.go:270] X Adding a control-plane node to a non-HA (non-multi-control plane) cluster is not currently supported. Please first delete the cluster and use 'minikube start --ha' to create new one.
	X Adding a control-plane node to a non-HA (non-multi-control plane) cluster is not currently supported. Please first delete the cluster and use 'minikube start --ha' to create new one.
	I0916 03:35:18.610304    3372 out.go:177] * Adding node m02 to cluster ha-632000 as [worker control-plane]
	I0916 03:35:18.631123    3372 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 03:35:18.631445    3372 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:35:18.631543    3372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:35:18.653252    3372 out.go:177] * Starting "ha-632000-m02" control-plane node in "ha-632000" cluster
	I0916 03:35:18.674037    3372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:35:18.674089    3372 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:35:18.674108    3372 cache.go:56] Caching tarball of preloaded images
	I0916 03:35:18.674263    3372 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 03:35:18.674276    3372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 03:35:18.674341    3372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:35:18.674938    3372 start.go:360] acquireMachinesLock for ha-632000-m02: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 03:35:18.675014    3372 start.go:364] duration metric: took 47.973µs to acquireMachinesLock for "ha-632000-m02"
	I0916 03:35:18.675041    3372 start.go:93] Provisioning new machine with config: &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0916 03:35:18.675121    3372 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0916 03:35:18.696144    3372 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 03:35:18.696359    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:18.696390    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:18.704825    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51129
	I0916 03:35:18.705167    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:18.705505    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:18.705514    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:18.705740    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:18.705849    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetMachineName
	I0916 03:35:18.705938    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:18.706055    3372 start.go:159] libmachine.API.Create for "ha-632000" (driver="hyperkit")
	I0916 03:35:18.706078    3372 client.go:168] LocalClient.Create starting
	I0916 03:35:18.706130    3372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 03:35:18.706175    3372 main.go:141] libmachine: Decoding PEM data...
	I0916 03:35:18.706194    3372 main.go:141] libmachine: Parsing certificate...
	I0916 03:35:18.706256    3372 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 03:35:18.706287    3372 main.go:141] libmachine: Decoding PEM data...
	I0916 03:35:18.706297    3372 main.go:141] libmachine: Parsing certificate...
	I0916 03:35:18.706309    3372 main.go:141] libmachine: Running pre-create checks...
	I0916 03:35:18.706314    3372 main.go:141] libmachine: (ha-632000-m02) Calling .PreCreateCheck
	I0916 03:35:18.706398    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:18.706431    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetConfigRaw
	I0916 03:35:18.706873    3372 main.go:141] libmachine: Creating machine...
	I0916 03:35:18.706881    3372 main.go:141] libmachine: (ha-632000-m02) Calling .Create
	I0916 03:35:18.706952    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:18.707066    3372 main.go:141] libmachine: (ha-632000-m02) DBG | I0916 03:35:18.706948    3376 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:35:18.707113    3372 main.go:141] libmachine: (ha-632000-m02) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 03:35:19.069010    3372 main.go:141] libmachine: (ha-632000-m02) DBG | I0916 03:35:19.068914    3376 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/id_rsa...
	I0916 03:35:19.284379    3372 main.go:141] libmachine: (ha-632000-m02) DBG | I0916 03:35:19.284314    3376 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/ha-632000-m02.rawdisk...
	I0916 03:35:19.284400    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Writing magic tar header
	I0916 03:35:19.284410    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Writing SSH key tar header
	I0916 03:35:19.285004    3372 main.go:141] libmachine: (ha-632000-m02) DBG | I0916 03:35:19.284975    3376 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02 ...
	I0916 03:35:19.710478    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:19.710494    3372 main.go:141] libmachine: (ha-632000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/hyperkit.pid
	I0916 03:35:19.710551    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Using UUID 463272f8-7d08-46d2-9169-06d5339aa804
	I0916 03:35:19.736202    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Generated MAC c6:8c:d2:f3:2d:a4
	I0916 03:35:19.736232    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000
	I0916 03:35:19.736308    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"463272f8-7d08-46d2-9169-06d5339aa804", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:35:19.736348    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"463272f8-7d08-46d2-9169-06d5339aa804", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 03:35:19.736402    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "463272f8-7d08-46d2-9169-06d5339aa804", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/ha-632000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machine
s/ha-632000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"}
	I0916 03:35:19.736453    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 463272f8-7d08-46d2-9169-06d5339aa804 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/ha-632000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-632000"
	I0916 03:35:19.736466    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 03:35:19.739488    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 DEBUG: hyperkit: Pid is 3377
	I0916 03:35:19.740014    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 0
	I0916 03:35:19.740028    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:19.740081    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:19.741014    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:19.741070    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:35:19.741083    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 03:35:19.741112    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:35:19.741129    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:35:19.741140    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:35:19.746898    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 03:35:19.758481    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 03:35:19.759260    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:35:19.759278    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:35:19.759325    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:35:19.759344    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:35:20.143154    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 03:35:20.143174    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 03:35:20.258095    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 03:35:20.258116    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 03:35:20.258131    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 03:35:20.258147    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 03:35:20.258975    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 03:35:20.258985    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 03:35:21.741668    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 1
	I0916 03:35:21.741684    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:21.741741    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:21.742536    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:21.742591    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:35:21.742606    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 03:35:21.742620    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:35:21.742628    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:35:21.742647    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:35:23.744693    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 2
	I0916 03:35:23.744708    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:23.744810    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:23.745675    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:23.745721    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:35:23.745735    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 03:35:23.745752    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:35:23.745765    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:35:23.745777    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:35:25.746578    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 3
	I0916 03:35:25.746599    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:25.746681    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:25.747451    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:25.747468    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:35:25.747475    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 03:35:25.747512    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:35:25.747523    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:35:25.747541    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:35:25.851205    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0916 03:35:25.851281    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0916 03:35:25.851288    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0916 03:35:25.875176    3372 main.go:141] libmachine: (ha-632000-m02) DBG | 2024/09/16 03:35:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0916 03:35:27.748919    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 4
	I0916 03:35:27.748933    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:27.749027    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:27.749802    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:27.749849    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0916 03:35:27.749861    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 03:35:27.749874    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 03:35:27.749899    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 03:35:27.749906    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 03:35:29.751092    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Attempt 5
	I0916 03:35:29.751108    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:29.751196    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:29.751967    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Searching for c6:8c:d2:f3:2d:a4 in /var/db/dhcpd_leases ...
	I0916 03:35:29.752028    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0916 03:35:29.752041    3372 main.go:141] libmachine: (ha-632000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e95b70}
	I0916 03:35:29.752050    3372 main.go:141] libmachine: (ha-632000-m02) DBG | Found match: c6:8c:d2:f3:2d:a4
	I0916 03:35:29.752056    3372 main.go:141] libmachine: (ha-632000-m02) DBG | IP: 192.169.0.6
	I0916 03:35:29.752090    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetConfigRaw
	I0916 03:35:29.752658    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:29.752754    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:29.752857    3372 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 03:35:29.752870    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetState
	I0916 03:35:29.752956    3372 main.go:141] libmachine: (ha-632000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:29.753007    3372 main.go:141] libmachine: (ha-632000-m02) DBG | hyperkit pid from json: 3377
	I0916 03:35:29.753758    3372 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 03:35:29.753779    3372 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 03:35:29.753784    3372 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 03:35:29.753789    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:29.753862    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:29.753941    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:29.754013    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:29.754124    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:29.754239    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:29.754433    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:29.754441    3372 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 03:35:30.813352    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:35:30.813374    3372 main.go:141] libmachine: Detecting the provisioner...
	I0916 03:35:30.813380    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:30.813503    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:30.813613    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.813721    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.813814    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:30.813949    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:30.814096    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:30.814103    3372 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 03:35:30.873265    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 03:35:30.873312    3372 main.go:141] libmachine: found compatible host: buildroot
	I0916 03:35:30.873318    3372 main.go:141] libmachine: Provisioning with buildroot...
	I0916 03:35:30.873323    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetMachineName
	I0916 03:35:30.873466    3372 buildroot.go:166] provisioning hostname "ha-632000-m02"
	I0916 03:35:30.873477    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetMachineName
	I0916 03:35:30.873560    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:30.873646    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:30.873718    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.873797    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.873888    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:30.874026    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:30.874173    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:30.874181    3372 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-632000-m02 && echo "ha-632000-m02" | sudo tee /etc/hostname
	I0916 03:35:30.943795    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-632000-m02
	
	I0916 03:35:30.943818    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:30.943956    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:30.944058    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.944150    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:30.944244    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:30.944370    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:30.944518    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:30.944528    3372 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-632000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-632000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-632000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 03:35:31.007675    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 03:35:31.007699    3372 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 03:35:31.007717    3372 buildroot.go:174] setting up certificates
	I0916 03:35:31.007724    3372 provision.go:84] configureAuth start
	I0916 03:35:31.007732    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetMachineName
	I0916 03:35:31.007872    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetIP
	I0916 03:35:31.007984    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:31.008084    3372 provision.go:143] copyHostCerts
	I0916 03:35:31.008117    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:35:31.008172    3372 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 03:35:31.008178    3372 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 03:35:31.008615    3372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 03:35:31.008841    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:35:31.008872    3372 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 03:35:31.008878    3372 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 03:35:31.008959    3372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 03:35:31.009105    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:35:31.009133    3372 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 03:35:31.009138    3372 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 03:35:31.009211    3372 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 03:35:31.009363    3372 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.ha-632000-m02 san=[127.0.0.1 192.169.0.6 ha-632000-m02 localhost minikube]
	I0916 03:35:31.209755    3372 provision.go:177] copyRemoteCerts
	I0916 03:35:31.209808    3372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 03:35:31.209822    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:31.209971    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:31.210068    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.210156    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:31.210237    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/id_rsa Username:docker}
	I0916 03:35:31.246458    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 03:35:31.246542    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 03:35:31.268019    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 03:35:31.268083    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 03:35:31.288346    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 03:35:31.288417    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 03:35:31.308422    3372 provision.go:87] duration metric: took 205.429329ms to configureAuth
	I0916 03:35:31.308437    3372 buildroot.go:189] setting minikube options for container-runtime
	I0916 03:35:31.308584    3372 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:35:31.308597    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:31.308732    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:31.308819    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:31.308906    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.308991    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.309061    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:31.309202    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:31.309329    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:31.309337    3372 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 03:35:31.368324    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 03:35:31.368336    3372 buildroot.go:70] root file system type: tmpfs
	I0916 03:35:31.368411    3372 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 03:35:31.368423    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:31.368552    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:31.368640    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.368760    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.368850    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:31.369001    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:31.369155    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:31.369201    3372 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 03:35:31.439118    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 03:35:31.439140    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:31.439271    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:31.439363    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.439443    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:31.439529    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:31.439655    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:31.439790    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:31.439802    3372 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 03:35:32.971141    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0916 03:35:32.971156    3372 main.go:141] libmachine: Checking connection to Docker...
	I0916 03:35:32.971163    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetURL
	I0916 03:35:32.971302    3372 main.go:141] libmachine: Docker is up and running!
	I0916 03:35:32.971311    3372 main.go:141] libmachine: Reticulating splines...
	I0916 03:35:32.971315    3372 client.go:171] duration metric: took 14.170182266s to LocalClient.Create
	I0916 03:35:32.971331    3372 start.go:167] duration metric: took 14.170226502s to libmachine.API.Create "ha-632000"
	I0916 03:35:32.971343    3372 start.go:293] postStartSetup for "ha-632000-m02" (driver="hyperkit")
	I0916 03:35:32.971353    3372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 03:35:32.971364    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:32.971509    3372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 03:35:32.971527    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:32.971625    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:32.971703    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:32.971781    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:32.971867    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/id_rsa Username:docker}
	I0916 03:35:33.010563    3372 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 03:35:33.014031    3372 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 03:35:33.014049    3372 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 03:35:33.014149    3372 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 03:35:33.014310    3372 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 03:35:33.014318    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /etc/ssl/certs/16492.pem
	I0916 03:35:33.014502    3372 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 03:35:33.027596    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:35:33.057730    3372 start.go:296] duration metric: took 86.375914ms for postStartSetup
	I0916 03:35:33.057763    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetConfigRaw
	I0916 03:35:33.058362    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetIP
	I0916 03:35:33.058502    3372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/config.json ...
	I0916 03:35:33.058801    3372 start.go:128] duration metric: took 14.288606542s to createHost
	I0916 03:35:33.058815    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:33.058913    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:33.059000    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:33.059081    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:33.059161    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:33.059282    3372 main.go:141] libmachine: Using SSH client type: native
	I0916 03:35:33.059409    3372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa81b820] 0xa81e500 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0916 03:35:33.059416    3372 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 03:35:33.117132    3372 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726482932.852625623
	
	I0916 03:35:33.117155    3372 fix.go:216] guest clock: 1726482932.852625623
	I0916 03:35:33.117161    3372 fix.go:229] Guest: 2024-09-16 03:35:32.852625623 -0700 PDT Remote: 2024-09-16 03:35:33.058809 -0700 PDT m=+14.486823607 (delta=-206.183377ms)
	I0916 03:35:33.117187    3372 fix.go:200] guest clock delta is within tolerance: -206.183377ms
	I0916 03:35:33.117192    3372 start.go:83] releasing machines lock for "ha-632000-m02", held for 14.347119202s
	I0916 03:35:33.117207    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:33.117334    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetIP
	I0916 03:35:33.117429    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:33.117720    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:33.117815    3372 main.go:141] libmachine: (ha-632000-m02) Calling .DriverName
	I0916 03:35:33.117911    3372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 03:35:33.117940    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:33.117957    3372 ssh_runner.go:195] Run: systemctl --version
	I0916 03:35:33.117967    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHHostname
	I0916 03:35:33.118028    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:33.118046    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHPort
	I0916 03:35:33.118140    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:33.118153    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHKeyPath
	I0916 03:35:33.118213    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:33.118244    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetSSHUsername
	I0916 03:35:33.118295    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/id_rsa Username:docker}
	I0916 03:35:33.118316    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000-m02/id_rsa Username:docker}
	I0916 03:35:33.152443    3372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 03:35:33.197901    3372 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 03:35:33.197986    3372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 03:35:33.210939    3372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 03:35:33.210954    3372 start.go:495] detecting cgroup driver to use...
	I0916 03:35:33.211054    3372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:35:33.225835    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 03:35:33.234077    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 03:35:33.242490    3372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 03:35:33.242537    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 03:35:33.251223    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:35:33.260046    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 03:35:33.268780    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 03:35:33.277645    3372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 03:35:33.286734    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 03:35:33.295612    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 03:35:33.304327    3372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 03:35:33.313152    3372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 03:35:33.321018    3372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 03:35:33.328902    3372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:35:33.421828    3372 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 03:35:33.440364    3372 start.go:495] detecting cgroup driver to use...
	I0916 03:35:33.440455    3372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 03:35:33.452921    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:35:33.465353    3372 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 03:35:33.479478    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 03:35:33.490028    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:35:33.499892    3372 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 03:35:33.520855    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 03:35:33.531469    3372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 03:35:33.546949    3372 ssh_runner.go:195] Run: which cri-dockerd
	I0916 03:35:33.549993    3372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 03:35:33.557385    3372 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 03:35:33.570779    3372 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 03:35:33.667015    3372 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 03:35:33.777447    3372 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 03:35:33.777515    3372 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 03:35:33.792911    3372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:35:33.898190    3372 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 03:35:36.227695    3372 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32947939s)
	I0916 03:35:36.227778    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 03:35:36.238799    3372 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0916 03:35:36.252099    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:35:36.263656    3372 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 03:35:36.358016    3372 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 03:35:36.473864    3372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:35:36.586656    3372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 03:35:36.600255    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 03:35:36.611070    3372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:35:36.707370    3372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 03:35:36.763101    3372 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 03:35:36.763191    3372 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 03:35:36.767733    3372 start.go:563] Will wait 60s for crictl version
	I0916 03:35:36.767787    3372 ssh_runner.go:195] Run: which crictl
	I0916 03:35:36.771704    3372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 03:35:36.797187    3372 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 03:35:36.797276    3372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:35:36.818621    3372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 03:35:36.861782    3372 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 03:35:36.861828    3372 main.go:141] libmachine: (ha-632000-m02) Calling .GetIP
	I0916 03:35:36.862289    3372 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0916 03:35:36.866919    3372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:35:36.877253    3372 mustload.go:65] Loading cluster: ha-632000
	I0916 03:35:36.877405    3372 config.go:182] Loaded profile config "ha-632000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:35:36.877627    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:36.877649    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:36.886251    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51152
	I0916 03:35:36.886610    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:36.886933    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:36.886949    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:36.887182    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:36.887311    3372 main.go:141] libmachine: (ha-632000) Calling .GetState
	I0916 03:35:36.887401    3372 main.go:141] libmachine: (ha-632000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:35:36.887474    3372 main.go:141] libmachine: (ha-632000) DBG | hyperkit pid from json: 3317
	I0916 03:35:36.888401    3372 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:35:36.888669    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:36.888696    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:36.897377    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51154
	I0916 03:35:36.897701    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:36.898029    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:36.898040    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:36.898241    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:36.898353    3372 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:35:36.898457    3372 certs.go:68] Setting up /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000 for IP: 192.169.0.6
	I0916 03:35:36.898465    3372 certs.go:194] generating shared ca certs ...
	I0916 03:35:36.898475    3372 certs.go:226] acquiring lock for ca certs: {Name:mkbe6743287669047aae1be1aaf47d24793fafa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:35:36.898631    3372 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key
	I0916 03:35:36.898682    3372 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key
	I0916 03:35:36.898700    3372 certs.go:256] generating profile certs ...
	I0916 03:35:36.898804    3372 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/client.key
	I0916 03:35:36.898824    3372 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.a7439124
	I0916 03:35:36.898842    3372 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.a7439124 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 <nil>]
	I0916 03:35:37.225739    3372 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.a7439124 ...
	I0916 03:35:37.225763    3372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.a7439124: {Name:mk0c0ad95a087cac1971240936e11622ed160562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:35:37.226110    3372 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.a7439124 ...
	I0916 03:35:37.226120    3372 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.a7439124: {Name:mk3e0ce920434a82781c836500e63acba3c8f190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 03:35:37.226351    3372 certs.go:381] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt.a7439124 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt
	I0916 03:35:37.228396    3372 certs.go:385] copying /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key.a7439124 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key
	I0916 03:35:37.228786    3372 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key
	I0916 03:35:37.228798    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 03:35:37.228822    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 03:35:37.228841    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 03:35:37.228860    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 03:35:37.228878    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 03:35:37.228900    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 03:35:37.228920    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 03:35:37.228940    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 03:35:37.229076    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem (1338 bytes)
	W0916 03:35:37.229128    3372 certs.go:480] ignoring /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649_empty.pem, impossibly tiny 0 bytes
	I0916 03:35:37.229136    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 03:35:37.229167    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem (1082 bytes)
	I0916 03:35:37.229196    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem (1123 bytes)
	I0916 03:35:37.229226    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem (1675 bytes)
	I0916 03:35:37.229292    3372 certs.go:484] found cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem (1708 bytes)
	I0916 03:35:37.229324    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem -> /usr/share/ca-certificates/1649.pem
	I0916 03:35:37.229349    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> /usr/share/ca-certificates/16492.pem
	I0916 03:35:37.229369    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:35:37.229405    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:35:37.229554    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:35:37.229662    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:35:37.229742    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:35:37.229819    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:35:37.257613    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 03:35:37.260849    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 03:35:37.269462    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 03:35:37.272502    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 03:35:37.281017    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 03:35:37.284340    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 03:35:37.292772    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 03:35:37.295836    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 03:35:37.304419    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 03:35:37.307459    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 03:35:37.316162    3372 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 03:35:37.319210    3372 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 03:35:37.327955    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 03:35:37.348299    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 03:35:37.369093    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 03:35:37.388811    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 03:35:37.408705    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 03:35:37.429480    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 03:35:37.449452    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 03:35:37.469496    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/ha-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 03:35:37.489496    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/1649.pem --> /usr/share/ca-certificates/1649.pem (1338 bytes)
	I0916 03:35:37.510225    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /usr/share/ca-certificates/16492.pem (1708 bytes)
	I0916 03:35:37.529980    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 03:35:37.550206    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 03:35:37.564901    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 03:35:37.578564    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 03:35:37.592087    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 03:35:37.605780    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 03:35:37.620453    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 03:35:37.634294    3372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 03:35:37.647724    3372 ssh_runner.go:195] Run: openssl version
	I0916 03:35:37.651928    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1649.pem && ln -fs /usr/share/ca-certificates/1649.pem /etc/ssl/certs/1649.pem"
	I0916 03:35:37.660264    3372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1649.pem
	I0916 03:35:37.663643    3372 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:24 /usr/share/ca-certificates/1649.pem
	I0916 03:35:37.663687    3372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1649.pem
	I0916 03:35:37.667897    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1649.pem /etc/ssl/certs/51391683.0"
	I0916 03:35:37.676484    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16492.pem && ln -fs /usr/share/ca-certificates/16492.pem /etc/ssl/certs/16492.pem"
	I0916 03:35:37.685829    3372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16492.pem
	I0916 03:35:37.689333    3372 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:24 /usr/share/ca-certificates/16492.pem
	I0916 03:35:37.689378    3372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16492.pem
	I0916 03:35:37.693805    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16492.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 03:35:37.702213    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 03:35:37.710608    3372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:35:37.714251    3372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:35:37.714316    3372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 03:35:37.718730    3372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
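The `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's hashed trust-directory convention: each CA in `/etc/ssl/certs` is reachable through a `<subject-hash>.0` symlink, which is how the library locates an issuer during verification. A sketch with a disposable self-signed cert (all names and paths here are illustrative; assumes the `openssl` CLI is available):

```shell
# Create a throwaway CA, compute its subject hash, and link it the same
# way the log links 51391683.0 / 3ec20f2e.0 / b5213941.0.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "ca.pem" "$DIR/$HASH.0"
# verify -CApath resolves the issuer through the <hash>.0 link.
openssl verify -CApath "$DIR" "$DIR/ca.pem"
rm -rf "$DIR"
```

Without the hash link, `-CApath` lookups fail even though the PEM file sits in the directory, which is why minikube creates the symlink alongside each copied cert.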
	I0916 03:35:37.727194    3372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 03:35:37.730316    3372 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 03:35:37.730363    3372 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.1  true true} ...
	I0916 03:35:37.730445    3372 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-632000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 03:35:37.730472    3372 kube-vip.go:115] generating kube-vip config ...
	I0916 03:35:37.730507    3372 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 03:35:37.742792    3372 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 03:35:37.742868    3372 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 03:35:37.742927    3372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 03:35:37.751304    3372 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 03:35:37.751369    3372 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 03:35:37.759198    3372 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 03:35:37.759203    3372 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 03:35:37.759205    3372 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubectl
	I0916 03:35:40.030574    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 03:35:40.030672    3372 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 03:35:40.034103    3372 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 03:35:40.034122    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 03:35:40.651538    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 03:35:40.651636    3372 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 03:35:40.655127    3372 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 03:35:40.655150    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 03:35:41.229829    3372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:35:41.240624    3372 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 03:35:41.240760    3372 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 03:35:41.244918    3372 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 03:35:41.244952    3372 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 03:35:41.465910    3372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 03:35:41.473253    3372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0916 03:35:41.486988    3372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 03:35:41.500362    3372 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1427 bytes)
	I0916 03:35:41.514104    3372 ssh_runner.go:195] Run: grep <nil>	control-plane.minikube.internal$ /etc/hosts
	I0916 03:35:41.517148    3372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "<nil>	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 03:35:41.526771    3372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 03:35:41.622823    3372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 03:35:41.638916    3372 host.go:66] Checking if "ha-632000" exists ...
	I0916 03:35:41.639203    3372 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:35:41.639227    3372 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:35:41.648359    3372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51181
	I0916 03:35:41.648716    3372 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:35:41.649072    3372 main.go:141] libmachine: Using API Version  1
	I0916 03:35:41.649084    3372 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:35:41.649303    3372 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:35:41.649417    3372 main.go:141] libmachine: (ha-632000) Calling .DriverName
	I0916 03:35:41.649508    3372 start.go:317] joinCluster: &{Name:ha-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:35:41.649571    3372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 03:35:41.649583    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHHostname
	I0916 03:35:41.649680    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHPort
	I0916 03:35:41.649772    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHKeyPath
	I0916 03:35:41.649868    3372 main.go:141] libmachine: (ha-632000) Calling .GetSSHUsername
	I0916 03:35:41.649955    3372 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/ha-632000/id_rsa Username:docker}
	I0916 03:35:41.733536    3372 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0916 03:35:41.733575    3372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p2agho.0x8r1lc9fc76guj5 --discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-632000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I0916 03:40:41.472403    3372 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p2agho.0x8r1lc9fc76guj5 --discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-632000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (4m59.738582822s)
	E0916 03:40:41.472455    3372 start.go:345] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p2agho.0x8r1lc9fc76guj5 --discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-632000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup control-plane.minikube.internal on 192.169.0.1:53: no such host
	To see the stack trace of this error execute with --v=5 or higher
	I0916 03:40:41.472467    3372 start.go:348] resetting control-plane node "m02" before attempting to rejoin cluster...
	I0916 03:40:41.472482    3372 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force"
	I0916 03:40:41.524480    3372 start.go:352] successfully reset control-plane node "m02"
	I0916 03:40:41.524520    3372 start.go:319] duration metric: took 4m59.874796348s to joinCluster
	I0916 03:40:41.546561    3372 out.go:201] 
	W0916 03:40:41.568119    3372 out.go:270] X Exiting due to INET_LOOKUP_HOST: join node to cluster: error joining control-plane node "m02" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p2agho.0x8r1lc9fc76guj5 --discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-632000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup control-plane.minikube.internal on 192.169.0.1:53: no such host
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to INET_LOOKUP_HOST: join node to cluster: error joining control-plane node "m02" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p2agho.0x8r1lc9fc76guj5 --discovery-token-ca-cert-hash sha256:6426e8fe85c5406ab9e5959652619add0665631f5261a0cc2292efa254ba689b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-632000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup control-plane.minikube.internal on 192.169.0.1:53: no such host
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 03:40:41.568163    3372 out.go:270] * Suggestion: Verify that your HTTP_PROXY and HTTPS_PROXY environment variables are set correctly.
	* Suggestion: Verify that your HTTP_PROXY and HTTPS_PROXY environment variables are set correctly.
	W0916 03:40:41.568187    3372 out.go:270] * Documentation: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
	* Documentation: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
	I0916 03:40:41.589197    3372 out.go:201] 

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-632000 --control-plane -v=7 --alsologtostderr" : exit status 48
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (15.04627559s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:40:41.769797    3431 status.go:417] kubeconfig endpoint: empty host

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (338.12s)
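Editor's note, a hedged sketch: the `kubeadm join` above failed in preflight because the guest could not resolve `control-plane.minikube.internal` through the host resolver (`192.169.0.1:53: no such host`). minikube normally pins that name in the guest's `/etc/hosts`, so a first diagnostic step is checking for that entry. The snippet below runs the same check against an inline hosts-style sample (the addresses are illustrative, not taken from the failed VM):

```shell
# Look up control-plane.minikube.internal in a hosts-style snippet.
# Sample data only; on the real guest you would read /etc/hosts instead.
hosts='127.0.0.1 localhost
192.169.0.5 control-plane.minikube.internal'

# Print the IP bound to the name, if any.
entry=$(printf '%s\n' "$hosts" | awk '$2 == "control-plane.minikube.internal" { print $1; exit }')
echo "${entry:-missing}"
```

If this prints `missing` on the guest, the join will fail exactly as logged, and re-adding the hosts entry (or fixing upstream DNS) is the likely remedy.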

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (14.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-632000" in json of 'profile list' to include 4 nodes but have 2 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\"
,\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"Disabl
eMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-632000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-632000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-632000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-632000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":f
alse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-632000 -n ha-632000: exit status 6 (14.290603746s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 03:40:57.048269    3445 status.go:417] kubeconfig endpoint: empty host

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-632000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (14.53s)
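Editor's note, a hedged sketch: the assertion above counts entries in the profile's `Nodes` array from `minikube profile list --output json` (expected 4, found 2 because the secondary control-plane node never joined). A minimal stand-in for that count, run against a trimmed, illustrative sample of the JSON rather than a live profile:

```shell
# Trimmed, illustrative profile JSON with two node entries.
json='{"valid":[{"Name":"ha-632000","Config":{"Nodes":[{"Name":"","ControlPlane":true},{"Name":"m02","ControlPlane":true}]}}]}'

# Each node entry carries a "ControlPlane" key, so counting those
# occurrences approximates the node count without a JSON parser.
nodes=$(( $(printf '%s' "$json" | grep -o '"ControlPlane":' | wc -l) ))
echo "$nodes"
```

A real check would parse the JSON properly (e.g. with `jq '.valid[0].Config.Nodes | length'`), but the grep count is enough to reproduce the test's 2-vs-4 discrepancy by hand.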

TestMountStart/serial/StartWithMountFirst (136.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.560112445s)

-- stdout --
	* [mount-start-1-937000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-937000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-937000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:90:d1:49:73:f0
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-937000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:bc:86:5b:70:ca
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for da:bc:86:5b:70:ca
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-937000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-937000 -n mount-start-1-937000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-937000 -n mount-start-1-937000: exit status 7 (79.659216ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 03:47:00.046463    3719 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 03:47:00.046486    3719 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-937000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.64s)
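Editor's note, a hedged sketch: "IP address never found in dhcp leases file" means the hyperkit driver timed out polling the host's DHCP leases (`/var/db/dhcpd_leases` on macOS) for the VM's MAC address. This same signature recurs in TestScheduledStopUnix and TestPause/serial/Start below; note also that some logged MACs have single-digit octets (e.g. `32:c2:5a:2d:5c:9`), matching the leading-zero-stripped form macOS writes into the leases file. The snippet below parses a leases-style sample for a given `hw_address`; the entry and MAC are illustrative, not from the failed run:

```shell
# Illustrative dhcpd_leases-style entry; on a real host you would read
# /var/db/dhcpd_leases instead of this inline sample.
leases='{
  name=mount-start-1-937000
  ip_address=192.169.0.7
  hw_address=1,da:bc:86:5b:70:ca
}'
mac='da:bc:86:5b:70:ca'

# Remember the most recent ip_address line; when a matching hw_address
# line appears in the same record, print that IP and stop.
ip=$(printf '%s\n' "$leases" | awk -v mac="$mac" '
  /ip_address=/ { sub(/.*=/, ""); ip = $0 }
  /hw_address=/ && index($0, mac) { print ip; exit }
')
echo "${ip:-not found}"
```

When this prints `not found` against the real leases file while the VM is running, the guest never obtained a lease, which usually points at the macOS bootpd/DHCP service or firewall rather than minikube itself.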

TestScheduledStopUnix (141.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-239000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-239000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.635880212s)

-- stdout --
	* [scheduled-stop-239000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-239000" primary control-plane node in "scheduled-stop-239000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-239000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:9b:e0:cf:b4:74
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-239000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c2:5a:2d:5c:9
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c2:5a:2d:5c:9
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-239000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-239000" primary control-plane node in "scheduled-stop-239000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-239000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:9b:e0:cf:b4:74
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-239000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c2:5a:2d:5c:9
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:c2:5a:2d:5c:9
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-16 04:00:57.792233 -0700 PDT m=+2466.461927930
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-239000 -n scheduled-stop-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-239000 -n scheduled-stop-239000: exit status 7 (78.35621ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 04:00:57.868987    5164 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:00:57.869011    5164 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-239000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-239000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-239000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-239000: (5.234418732s)
--- FAIL: TestScheduledStopUnix (141.95s)

TestPause/serial/Start (139.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-571000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0916 04:39:21.647203    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-571000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m19.643439393s)

-- stdout --
	* [pause-571000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-571000" primary control-plane node in "pause-571000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-571000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:a1:88:fa:7d:dd
	* Failed to start hyperkit VM. Running "minikube delete -p pause-571000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:9d:bd:d1:b5:e7
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:9d:bd:d1:b5:e7
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-571000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-571000 -n pause-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-571000 -n pause-571000: exit status 7 (81.063313ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0916 04:41:37.475916    8016 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0916 04:41:37.475938    8016 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-571000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (139.72s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-505000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1
E0916 05:00:26.724563    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:00:46.850056    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:00:54.439435    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-505000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1: exit status 90 (1m16.542190546s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-505000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "default-k8s-diff-port-505000" primary control-plane node in "default-k8s-diff-port-505000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 04:59:58.428426   10604 out.go:345] Setting OutFile to fd 1 ...
	I0916 04:59:58.428597   10604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:59:58.428602   10604 out.go:358] Setting ErrFile to fd 2...
	I0916 04:59:58.428606   10604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 04:59:58.428781   10604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 04:59:58.430237   10604 out.go:352] Setting JSON to false
	I0916 04:59:58.452690   10604 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7168,"bootTime":1726480830,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 04:59:58.452848   10604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 04:59:58.474994   10604 out.go:177] * [default-k8s-diff-port-505000] minikube v1.34.0 on Darwin 14.6.1
	I0916 04:59:58.519250   10604 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 04:59:58.519303   10604 notify.go:220] Checking for updates...
	I0916 04:59:58.562101   10604 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 04:59:58.583381   10604 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 04:59:58.604469   10604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 04:59:58.625380   10604 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:59:58.646246   10604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 04:59:58.668203   10604 config.go:182] Loaded profile config "embed-certs-877000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 04:59:58.668365   10604 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 04:59:58.698366   10604 out.go:177] * Using the hyperkit driver based on user configuration
	I0916 04:59:58.740388   10604 start.go:297] selected driver: hyperkit
	I0916 04:59:58.740415   10604 start.go:901] validating driver "hyperkit" against <nil>
	I0916 04:59:58.740434   10604 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 04:59:58.744586   10604 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:59:58.744706   10604 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 04:59:58.753211   10604 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 04:59:58.757137   10604 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:59:58.757157   10604 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 04:59:58.757199   10604 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 04:59:58.757428   10604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 04:59:58.757463   10604 cni.go:84] Creating CNI manager for ""
	I0916 04:59:58.757507   10604 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 04:59:58.757512   10604 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 04:59:58.757586   10604 start.go:340] cluster config:
	{Name:default-k8s-diff-port-505000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-505000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 04:59:58.757675   10604 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 04:59:58.799313   10604 out.go:177] * Starting "default-k8s-diff-port-505000" primary control-plane node in "default-k8s-diff-port-505000" cluster
	I0916 04:59:58.824242   10604 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 04:59:58.824287   10604 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 04:59:58.824302   10604 cache.go:56] Caching tarball of preloaded images
	I0916 04:59:58.824413   10604 preload.go:172] Found /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 04:59:58.824423   10604 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 04:59:58.824518   10604 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/default-k8s-diff-port-505000/config.json ...
	I0916 04:59:58.824544   10604 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/default-k8s-diff-port-505000/config.json: {Name:mk274af23cd8419f04b344ef3dfeeaa013234dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 04:59:58.824853   10604 start.go:360] acquireMachinesLock for default-k8s-diff-port-505000: {Name:mk832c5c56518e77f79e3618682e80ad99a8b12a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 04:59:58.824918   10604 start.go:364] duration metric: took 52.005µs to acquireMachinesLock for "default-k8s-diff-port-505000"
	I0916 04:59:58.824956   10604 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-505000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-505000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 04:59:58.824995   10604 start.go:125] createHost starting for "" (driver="hyperkit")
	I0916 04:59:58.846806   10604 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 04:59:58.846961   10604 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 04:59:58.847006   10604 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 04:59:58.855749   10604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57136
	I0916 04:59:58.856121   10604 main.go:141] libmachine: () Calling .GetVersion
	I0916 04:59:58.856552   10604 main.go:141] libmachine: Using API Version  1
	I0916 04:59:58.856571   10604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 04:59:58.856809   10604 main.go:141] libmachine: () Calling .GetMachineName
	I0916 04:59:58.856928   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetMachineName
	I0916 04:59:58.857013   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 04:59:58.857113   10604 start.go:159] libmachine.API.Create for "default-k8s-diff-port-505000" (driver="hyperkit")
	I0916 04:59:58.857133   10604 client.go:168] LocalClient.Create starting
	I0916 04:59:58.857163   10604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem
	I0916 04:59:58.857217   10604 main.go:141] libmachine: Decoding PEM data...
	I0916 04:59:58.857235   10604 main.go:141] libmachine: Parsing certificate...
	I0916 04:59:58.857285   10604 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem
	I0916 04:59:58.857322   10604 main.go:141] libmachine: Decoding PEM data...
	I0916 04:59:58.857330   10604 main.go:141] libmachine: Parsing certificate...
	I0916 04:59:58.857349   10604 main.go:141] libmachine: Running pre-create checks...
	I0916 04:59:58.857360   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .PreCreateCheck
	I0916 04:59:58.857427   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:59:58.857580   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetConfigRaw
	I0916 04:59:58.883770   10604 main.go:141] libmachine: Creating machine...
	I0916 04:59:58.883781   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .Create
	I0916 04:59:58.883876   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:59:58.884055   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | I0916 04:59:58.883874   10612 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 04:59:58.884088   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Downloading /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso...
	I0916 04:59:59.077202   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | I0916 04:59:59.077137   10612 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/id_rsa...
	I0916 04:59:59.144096   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | I0916 04:59:59.144040   10612 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/default-k8s-diff-port-505000.rawdisk...
	I0916 04:59:59.144114   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Writing magic tar header
	I0916 04:59:59.144128   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Writing SSH key tar header
	I0916 04:59:59.144754   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | I0916 04:59:59.144709   10612 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000 ...
	I0916 04:59:59.586394   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:59:59.586415   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/hyperkit.pid
	I0916 04:59:59.586454   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Using UUID 182f605c-276a-4f8c-85e2-59ca341df722
	I0916 04:59:59.613918   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Generated MAC c2:c0:f2:8a:94:2d
	I0916 04:59:59.613951   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-505000
	I0916 04:59:59.613994   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"182f605c-276a-4f8c-85e2-59ca341df722", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d4240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:59:59.614029   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"182f605c-276a-4f8c-85e2-59ca341df722", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d4240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0916 04:59:59.614121   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "182f605c-276a-4f8c-85e2-59ca341df722", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/default-k8s-diff-port-505000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-505000"}
	I0916 04:59:59.614201   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 182f605c-276a-4f8c-85e2-59ca341df722 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/default-k8s-diff-port-505000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/tty,log=/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/console-ring -f kexec,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/bzimage,/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-505000"
	I0916 04:59:59.614243   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0916 04:59:59.617436   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 DEBUG: hyperkit: Pid is 10621
	I0916 04:59:59.617986   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 0
	I0916 04:59:59.618002   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 04:59:59.618071   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 04:59:59.619681   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 04:59:59.619838   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I0916 04:59:59.619863   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:22:95:db:0:dd:2 ID:1,22:95:db:0:dd:2 Lease:0x66e96f23}
	I0916 04:59:59.619906   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:86:b0:8a:69:f6:24 ID:1,86:b0:8a:69:f6:24 Lease:0x66e96d8e}
	I0916 04:59:59.619925   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:46:c5:8c:50:41:b6 ID:1,46:c5:8c:50:41:b6 Lease:0x66e96da1}
	I0916 04:59:59.619943   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:fe:14:f3:6b:2e:53 ID:1,fe:14:f3:6b:2e:53 Lease:0x66e96cbc}
	I0916 04:59:59.619959   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5a:39:89:70:ab:19 ID:1,5a:39:89:70:ab:19 Lease:0x66e96c87}
	I0916 04:59:59.619972   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e2:89:cf:d0:eb:14 ID:1,e2:89:cf:d0:eb:14 Lease:0x66e96c6a}
	I0916 04:59:59.619988   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:5e:11:81:62:3e:74 ID:1,5e:11:81:62:3e:74 Lease:0x66e96c3a}
	I0916 04:59:59.620017   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:5a:ba:72:9c:96:ef ID:1,5a:ba:72:9c:96:ef Lease:0x66e96c19}
	I0916 04:59:59.620035   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ba:82:b7:de:3a:ed ID:1,ba:82:b7:de:3a:ed Lease:0x66e96bb7}
	I0916 04:59:59.620051   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:86:8:5e:fc:51:f2 ID:1,86:8:5e:fc:51:f2 Lease:0x66e96b77}
	I0916 04:59:59.620067   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:4a:12:4:81:e0:7e ID:1,4a:12:4:81:e0:7e Lease:0x66e96b61}
	I0916 04:59:59.620081   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2a:78:54:d0:86:8d ID:1,2a:78:54:d0:86:8d Lease:0x66e819d5}
	I0916 04:59:59.620111   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:4e:b2:df:76:71:cd ID:1,4e:b2:df:76:71:cd Lease:0x66e96b0d}
	I0916 04:59:59.620144   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b0:7d:29:dd:3e ID:1,2:b0:7d:29:dd:3e Lease:0x66e819a9}
	I0916 04:59:59.620164   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:16:b9:8d:b6:e0:be ID:1,16:b9:8d:b6:e0:be Lease:0x66e96aad}
	I0916 04:59:59.620179   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:82:c3:d6:78:85:74 ID:1,82:c3:d6:78:85:74 Lease:0x66e967d1}
	I0916 04:59:59.620189   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8a:70:bc:94:42:8b ID:1,8a:70:bc:94:42:8b Lease:0x66e964f2}
	I0916 04:59:59.620199   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 04:59:59.620212   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 04:59:59.620225   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 04:59:59.620238   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 04:59:59.620250   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 04:59:59.620262   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 04:59:59.620273   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 04:59:59.620292   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 04:59:59.620310   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 04:59:59.620329   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 04:59:59.620342   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 04:59:59.620354   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 04:59:59.620362   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 04:59:59.620369   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 04:59:59.620379   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 04:59:59.626699   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0916 04:59:59.636160   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0916 04:59:59.637205   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 04:59:59.637233   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 04:59:59.637246   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 04:59:59.637276   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 04:59:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 05:00:00.039198   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0916 05:00:00.039214   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0916 05:00:00.154056   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0916 05:00:00.154084   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0916 05:00:00.154102   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0916 05:00:00.154114   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0916 05:00:00.154906   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0916 05:00:00.154918   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0916 05:00:01.622224   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 1
	I0916 05:00:01.622262   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:01.622391   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:01.623287   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 05:00:01.623372   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I0916 05:00:01.623384   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:22:95:db:0:dd:2 ID:1,22:95:db:0:dd:2 Lease:0x66e96f23}
	I0916 05:00:01.623396   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:86:b0:8a:69:f6:24 ID:1,86:b0:8a:69:f6:24 Lease:0x66e96d8e}
	I0916 05:00:01.623406   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:46:c5:8c:50:41:b6 ID:1,46:c5:8c:50:41:b6 Lease:0x66e96da1}
	I0916 05:00:01.623431   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:fe:14:f3:6b:2e:53 ID:1,fe:14:f3:6b:2e:53 Lease:0x66e96cbc}
	I0916 05:00:01.623451   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5a:39:89:70:ab:19 ID:1,5a:39:89:70:ab:19 Lease:0x66e96c87}
	I0916 05:00:01.623464   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e2:89:cf:d0:eb:14 ID:1,e2:89:cf:d0:eb:14 Lease:0x66e96c6a}
	I0916 05:00:01.623502   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:5e:11:81:62:3e:74 ID:1,5e:11:81:62:3e:74 Lease:0x66e96c3a}
	I0916 05:00:01.623521   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:5a:ba:72:9c:96:ef ID:1,5a:ba:72:9c:96:ef Lease:0x66e96c19}
	I0916 05:00:01.623529   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ba:82:b7:de:3a:ed ID:1,ba:82:b7:de:3a:ed Lease:0x66e96bb7}
	I0916 05:00:01.623537   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:86:8:5e:fc:51:f2 ID:1,86:8:5e:fc:51:f2 Lease:0x66e96b77}
	I0916 05:00:01.623545   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:4a:12:4:81:e0:7e ID:1,4a:12:4:81:e0:7e Lease:0x66e96b61}
	I0916 05:00:01.623551   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2a:78:54:d0:86:8d ID:1,2a:78:54:d0:86:8d Lease:0x66e819d5}
	I0916 05:00:01.623560   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:4e:b2:df:76:71:cd ID:1,4e:b2:df:76:71:cd Lease:0x66e96b0d}
	I0916 05:00:01.623568   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b0:7d:29:dd:3e ID:1,2:b0:7d:29:dd:3e Lease:0x66e819a9}
	I0916 05:00:01.623576   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:16:b9:8d:b6:e0:be ID:1,16:b9:8d:b6:e0:be Lease:0x66e96aad}
	I0916 05:00:01.623583   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:82:c3:d6:78:85:74 ID:1,82:c3:d6:78:85:74 Lease:0x66e967d1}
	I0916 05:00:01.623592   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8a:70:bc:94:42:8b ID:1,8a:70:bc:94:42:8b Lease:0x66e964f2}
	I0916 05:00:01.623600   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 05:00:01.623608   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 05:00:01.623616   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 05:00:01.623624   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 05:00:01.623631   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 05:00:01.623639   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 05:00:01.623646   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 05:00:01.623654   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 05:00:01.623684   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 05:00:01.623697   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 05:00:01.623704   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 05:00:01.623711   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 05:00:01.623718   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 05:00:01.623725   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 05:00:01.623732   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 05:00:03.626206   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 2
	I0916 05:00:03.626223   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:03.626301   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:03.627172   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 05:00:03.627228   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I0916 05:00:03.627259   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:22:95:db:0:dd:2 ID:1,22:95:db:0:dd:2 Lease:0x66e96f23}
	I0916 05:00:03.627270   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:86:b0:8a:69:f6:24 ID:1,86:b0:8a:69:f6:24 Lease:0x66e96d8e}
	I0916 05:00:03.627284   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:46:c5:8c:50:41:b6 ID:1,46:c5:8c:50:41:b6 Lease:0x66e96da1}
	I0916 05:00:03.627294   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:fe:14:f3:6b:2e:53 ID:1,fe:14:f3:6b:2e:53 Lease:0x66e96cbc}
	I0916 05:00:03.627304   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5a:39:89:70:ab:19 ID:1,5a:39:89:70:ab:19 Lease:0x66e96c87}
	I0916 05:00:03.627321   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e2:89:cf:d0:eb:14 ID:1,e2:89:cf:d0:eb:14 Lease:0x66e96c6a}
	I0916 05:00:03.627328   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:5e:11:81:62:3e:74 ID:1,5e:11:81:62:3e:74 Lease:0x66e96c3a}
	I0916 05:00:03.627335   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:5a:ba:72:9c:96:ef ID:1,5a:ba:72:9c:96:ef Lease:0x66e96c19}
	I0916 05:00:03.627360   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ba:82:b7:de:3a:ed ID:1,ba:82:b7:de:3a:ed Lease:0x66e96bb7}
	I0916 05:00:03.627376   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:86:8:5e:fc:51:f2 ID:1,86:8:5e:fc:51:f2 Lease:0x66e96b77}
	I0916 05:00:03.627389   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:4a:12:4:81:e0:7e ID:1,4a:12:4:81:e0:7e Lease:0x66e96b61}
	I0916 05:00:03.627397   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2a:78:54:d0:86:8d ID:1,2a:78:54:d0:86:8d Lease:0x66e819d5}
	I0916 05:00:03.627414   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:4e:b2:df:76:71:cd ID:1,4e:b2:df:76:71:cd Lease:0x66e96b0d}
	I0916 05:00:03.627427   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b0:7d:29:dd:3e ID:1,2:b0:7d:29:dd:3e Lease:0x66e819a9}
	I0916 05:00:03.627435   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:16:b9:8d:b6:e0:be ID:1,16:b9:8d:b6:e0:be Lease:0x66e96aad}
	I0916 05:00:03.627443   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:82:c3:d6:78:85:74 ID:1,82:c3:d6:78:85:74 Lease:0x66e967d1}
	I0916 05:00:03.627457   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8a:70:bc:94:42:8b ID:1,8a:70:bc:94:42:8b Lease:0x66e964f2}
	I0916 05:00:03.627472   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 05:00:03.627486   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 05:00:03.627504   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 05:00:03.627520   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 05:00:03.627533   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 05:00:03.627541   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 05:00:03.627548   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 05:00:03.627554   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 05:00:03.627562   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 05:00:03.627588   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 05:00:03.627597   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 05:00:03.627606   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 05:00:03.627611   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 05:00:03.627620   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 05:00:03.627629   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 05:00:05.629207   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 3
	I0916 05:00:05.629220   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:05.629302   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:05.630115   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 05:00:05.630187   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I0916 05:00:05.630198   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:22:95:db:0:dd:2 ID:1,22:95:db:0:dd:2 Lease:0x66e96f23}
	I0916 05:00:05.630216   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:86:b0:8a:69:f6:24 ID:1,86:b0:8a:69:f6:24 Lease:0x66e96d8e}
	I0916 05:00:05.630225   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:46:c5:8c:50:41:b6 ID:1,46:c5:8c:50:41:b6 Lease:0x66e96da1}
	I0916 05:00:05.630248   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:fe:14:f3:6b:2e:53 ID:1,fe:14:f3:6b:2e:53 Lease:0x66e96cbc}
	I0916 05:00:05.630260   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5a:39:89:70:ab:19 ID:1,5a:39:89:70:ab:19 Lease:0x66e96c87}
	I0916 05:00:05.630270   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e2:89:cf:d0:eb:14 ID:1,e2:89:cf:d0:eb:14 Lease:0x66e96c6a}
	I0916 05:00:05.630279   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:5e:11:81:62:3e:74 ID:1,5e:11:81:62:3e:74 Lease:0x66e96c3a}
	I0916 05:00:05.630294   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:5a:ba:72:9c:96:ef ID:1,5a:ba:72:9c:96:ef Lease:0x66e96c19}
	I0916 05:00:05.630306   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ba:82:b7:de:3a:ed ID:1,ba:82:b7:de:3a:ed Lease:0x66e96bb7}
	I0916 05:00:05.630323   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:86:8:5e:fc:51:f2 ID:1,86:8:5e:fc:51:f2 Lease:0x66e96b77}
	I0916 05:00:05.630331   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:4a:12:4:81:e0:7e ID:1,4a:12:4:81:e0:7e Lease:0x66e96b61}
	I0916 05:00:05.630338   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2a:78:54:d0:86:8d ID:1,2a:78:54:d0:86:8d Lease:0x66e819d5}
	I0916 05:00:05.630346   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:4e:b2:df:76:71:cd ID:1,4e:b2:df:76:71:cd Lease:0x66e96b0d}
	I0916 05:00:05.630353   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b0:7d:29:dd:3e ID:1,2:b0:7d:29:dd:3e Lease:0x66e819a9}
	I0916 05:00:05.630361   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:16:b9:8d:b6:e0:be ID:1,16:b9:8d:b6:e0:be Lease:0x66e96aad}
	I0916 05:00:05.630368   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:82:c3:d6:78:85:74 ID:1,82:c3:d6:78:85:74 Lease:0x66e967d1}
	I0916 05:00:05.630376   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8a:70:bc:94:42:8b ID:1,8a:70:bc:94:42:8b Lease:0x66e964f2}
	I0916 05:00:05.630383   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 05:00:05.630391   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 05:00:05.630407   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 05:00:05.630418   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 05:00:05.630426   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 05:00:05.630434   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 05:00:05.630441   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 05:00:05.630447   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 05:00:05.630453   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 05:00:05.630460   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 05:00:05.630466   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 05:00:05.630472   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 05:00:05.630479   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 05:00:05.630487   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 05:00:05.630496   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 05:00:05.861237   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0916 05:00:05.861350   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0916 05:00:05.861372   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0916 05:00:05.886206   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | 2024/09/16 05:00:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0916 05:00:07.632960   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 4
	I0916 05:00:07.632976   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:07.633067   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:07.633892   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 05:00:07.633959   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I0916 05:00:07.633966   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:22:95:db:0:dd:2 ID:1,22:95:db:0:dd:2 Lease:0x66e96f23}
	I0916 05:00:07.633974   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:86:b0:8a:69:f6:24 ID:1,86:b0:8a:69:f6:24 Lease:0x66e96d8e}
	I0916 05:00:07.633980   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:46:c5:8c:50:41:b6 ID:1,46:c5:8c:50:41:b6 Lease:0x66e96da1}
	I0916 05:00:07.633986   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:fe:14:f3:6b:2e:53 ID:1,fe:14:f3:6b:2e:53 Lease:0x66e96cbc}
	I0916 05:00:07.633992   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:5a:39:89:70:ab:19 ID:1,5a:39:89:70:ab:19 Lease:0x66e96c87}
	I0916 05:00:07.633998   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:e2:89:cf:d0:eb:14 ID:1,e2:89:cf:d0:eb:14 Lease:0x66e96c6a}
	I0916 05:00:07.634006   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:5e:11:81:62:3e:74 ID:1,5e:11:81:62:3e:74 Lease:0x66e96c3a}
	I0916 05:00:07.634013   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:5a:ba:72:9c:96:ef ID:1,5a:ba:72:9c:96:ef Lease:0x66e96c19}
	I0916 05:00:07.634019   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:ba:82:b7:de:3a:ed ID:1,ba:82:b7:de:3a:ed Lease:0x66e96bb7}
	I0916 05:00:07.634036   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:86:8:5e:fc:51:f2 ID:1,86:8:5e:fc:51:f2 Lease:0x66e96b77}
	I0916 05:00:07.634050   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:4a:12:4:81:e0:7e ID:1,4a:12:4:81:e0:7e Lease:0x66e96b61}
	I0916 05:00:07.634060   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:2a:78:54:d0:86:8d ID:1,2a:78:54:d0:86:8d Lease:0x66e819d5}
	I0916 05:00:07.634070   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:4e:b2:df:76:71:cd ID:1,4e:b2:df:76:71:cd Lease:0x66e96b0d}
	I0916 05:00:07.634080   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b0:7d:29:dd:3e ID:1,2:b0:7d:29:dd:3e Lease:0x66e819a9}
	I0916 05:00:07.634088   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:16:b9:8d:b6:e0:be ID:1,16:b9:8d:b6:e0:be Lease:0x66e96aad}
	I0916 05:00:07.634095   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:82:c3:d6:78:85:74 ID:1,82:c3:d6:78:85:74 Lease:0x66e967d1}
	I0916 05:00:07.634103   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8a:70:bc:94:42:8b ID:1,8a:70:bc:94:42:8b Lease:0x66e964f2}
	I0916 05:00:07.634117   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:ca:ce:7f:3b:e5:57 ID:1,ca:ce:7f:3b:e5:57 Lease:0x66e9617c}
	I0916 05:00:07.634129   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ca:75:7f:e5:69:21 ID:1,ca:75:7f:e5:69:21 Lease:0x66e960bd}
	I0916 05:00:07.634138   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:cb:62:3e:53:da ID:1,8e:cb:62:3e:53:da Lease:0x66e80ecc}
	I0916 05:00:07.634147   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:86:1c:96:6:52:7b ID:1,86:1c:96:6:52:7b Lease:0x66e80e3c}
	I0916 05:00:07.634157   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5d:45:3a:19:7f ID:1,1e:5d:45:3a:19:7f Lease:0x66e96005}
	I0916 05:00:07.634164   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:62:42:c8:75:17:ae ID:1,62:42:c8:75:17:ae Lease:0x66e95fd8}
	I0916 05:00:07.634195   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:92:34:23:83:45:dc ID:1,92:34:23:83:45:dc Lease:0x66e95d75}
	I0916 05:00:07.634212   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ae:4a:cc:ee:65:d6 ID:1,ae:4a:cc:ee:65:d6 Lease:0x66e95d51}
	I0916 05:00:07.634221   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:4a:a2:16:da:d9:1f ID:1,4a:a2:16:da:d9:1f Lease:0x66e95d13}
	I0916 05:00:07.634230   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:8a:86:14:a8:21:97 ID:1,8a:86:14:a8:21:97 Lease:0x66e80b87}
	I0916 05:00:07.634236   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:c6:8c:d2:f3:2d:a4 ID:1,c6:8c:d2:f3:2d:a4 Lease:0x66e80b5c}
	I0916 05:00:07.634243   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:1a:88:5b:ab:d5:de ID:1,1a:88:5b:ab:d5:de Lease:0x66e95b2c}
	I0916 05:00:07.634255   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:72:b0:31:a7:54:f1 ID:1,72:b0:31:a7:54:f1 Lease:0x66e95903}
	I0916 05:00:07.634267   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2a:ad:c7:fb:c8:e6 ID:1,2a:ad:c7:fb:c8:e6 Lease:0x66e9583d}
	I0916 05:00:07.634276   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:c2:a3:9:c2:ab:37 ID:1,c2:a3:9:c2:ab:37 Lease:0x66e957eb}
	I0916 05:00:09.636054   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Attempt 5
	I0916 05:00:09.636081   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:09.636210   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:09.637736   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Searching for c2:c0:f2:8a:94:2d in /var/db/dhcpd_leases ...
	I0916 05:00:09.637925   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found 33 entries in /var/db/dhcpd_leases!
	I0916 05:00:09.637945   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:c2:c0:f2:8a:94:2d ID:1,c2:c0:f2:8a:94:2d Lease:0x66e96f48}
	I0916 05:00:09.637973   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | Found match: c2:c0:f2:8a:94:2d
	I0916 05:00:09.637978   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | IP: 192.169.0.34
	I0916 05:00:09.638011   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetConfigRaw
	I0916 05:00:09.638795   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:09.638967   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:09.639120   10604 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 05:00:09.639137   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetState
	I0916 05:00:09.639253   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 05:00:09.639336   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) DBG | hyperkit pid from json: 10621
	I0916 05:00:09.640388   10604 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 05:00:09.640404   10604 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 05:00:09.640409   10604 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 05:00:09.640416   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:09.640535   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:09.640628   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:09.640718   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:09.640803   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:09.640914   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:09.641100   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:09.641107   10604 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 05:00:10.706085   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 05:00:10.706100   10604 main.go:141] libmachine: Detecting the provisioner...
	I0916 05:00:10.706107   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:10.706236   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:10.706341   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.706428   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.706524   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:10.706662   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:10.706800   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:10.706808   10604 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 05:00:10.770074   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 05:00:10.770160   10604 main.go:141] libmachine: found compatible host: buildroot
	I0916 05:00:10.770167   10604 main.go:141] libmachine: Provisioning with buildroot...
	I0916 05:00:10.770173   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetMachineName
	I0916 05:00:10.770341   10604 buildroot.go:166] provisioning hostname "default-k8s-diff-port-505000"
	I0916 05:00:10.770355   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetMachineName
	I0916 05:00:10.770452   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:10.770540   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:10.770621   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.770720   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.770803   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:10.770928   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:10.771068   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:10.771077   10604 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-505000 && echo "default-k8s-diff-port-505000" | sudo tee /etc/hostname
	I0916 05:00:10.843492   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-505000
	
	I0916 05:00:10.843512   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:10.843651   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:10.843751   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.843846   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:10.843931   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:10.844071   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:10.844223   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:10.844235   10604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-505000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-505000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-505000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 05:00:10.918000   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
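	The shell snippet above edits `/etc/hosts` idempotently: skip if the hostname is already mapped, rewrite an existing `127.0.1.1` line if there is one, otherwise append. A minimal Go sketch of the same logic (illustrative only; the real step runs over SSH as shown):

	```go
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic: no-op if some line already ends
	// with the hostname, otherwise rewrite the 127.0.1.1 line or append one.
	func ensureHostsEntry(hosts, name string) string {
		for _, line := range strings.Split(hosts, "\n") {
			f := strings.Fields(line)
			if len(f) >= 2 && f[len(f)-1] == name {
				return hosts // already mapped, leave the file alone
			}
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "default-k8s-diff-port-505000"))
	}
	```

	The idempotence matters because this step runs on every provision: a naive append would accumulate duplicate entries across restarts.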
	I0916 05:00:10.918019   10604 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19651-1087/.minikube CaCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19651-1087/.minikube}
	I0916 05:00:10.918043   10604 buildroot.go:174] setting up certificates
	I0916 05:00:10.918051   10604 provision.go:84] configureAuth start
	I0916 05:00:10.918065   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetMachineName
	I0916 05:00:10.918210   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetIP
	I0916 05:00:10.918310   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:10.918394   10604 provision.go:143] copyHostCerts
	I0916 05:00:10.918487   10604 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem, removing ...
	I0916 05:00:10.918495   10604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem
	I0916 05:00:10.918692   10604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/ca.pem (1082 bytes)
	I0916 05:00:10.918926   10604 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem, removing ...
	I0916 05:00:10.918932   10604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem
	I0916 05:00:10.919152   10604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/cert.pem (1123 bytes)
	I0916 05:00:10.919335   10604 exec_runner.go:144] found /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem, removing ...
	I0916 05:00:10.919341   10604 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem
	I0916 05:00:10.919415   10604 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19651-1087/.minikube/key.pem (1675 bytes)
	I0916 05:00:10.919565   10604 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-505000 san=[127.0.0.1 192.169.0.34 default-k8s-diff-port-505000 localhost minikube]
	I0916 05:00:11.067409   10604 provision.go:177] copyRemoteCerts
	I0916 05:00:11.067478   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 05:00:11.067495   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:11.067638   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:11.067722   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.067803   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:11.067894   10604 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/id_rsa Username:docker}
	I0916 05:00:11.106145   10604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 05:00:11.125928   10604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 05:00:11.145795   10604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 05:00:11.165219   10604 provision.go:87] duration metric: took 247.042358ms to configureAuth
	I0916 05:00:11.165234   10604 buildroot.go:189] setting minikube options for container-runtime
	I0916 05:00:11.165381   10604 config.go:182] Loaded profile config "default-k8s-diff-port-505000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 05:00:11.165396   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:11.165533   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:11.165629   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:11.165716   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.165798   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.165885   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:11.166009   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:11.166135   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:11.166143   10604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 05:00:11.228407   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 05:00:11.228432   10604 buildroot.go:70] root file system type: tmpfs
	I0916 05:00:11.228508   10604 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 05:00:11.228521   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:11.228654   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:11.228740   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.228847   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.228939   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:11.229080   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:11.229226   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:11.229270   10604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 05:00:11.303377   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 05:00:11.303403   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:11.303550   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:11.303642   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.303725   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:11.303807   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:11.303955   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:11.304094   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:11.304106   10604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 05:00:12.864114   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
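	The `diff -u old new || { mv; daemon-reload; restart; }` command above is an update-if-changed pattern: the unit file is written to a `.new` sibling, and the expensive daemon-reload/restart only runs when the content actually differs (here the diff fails because the unit doesn't exist yet, so it installs fresh). A local sketch of that pattern, under the assumption that the caller handles the reload/restart when `true` is returned:

	```go
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	// installIfChanged writes data to path only when the current contents differ
	// (or the file is missing), and reports whether a change was made — i.e.
	// whether a daemon-reload + service restart would be needed.
	func installIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // up to date, skip the restart
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		// write to a .new sibling first, then rename into place, like the
		// docker.service.new dance in the log
		tmp := path + ".new"
		if err := os.WriteFile(tmp, data, 0644); err != nil {
			return false, err
		}
		return true, os.Rename(tmp, path)
	}

	func main() {
		dir, _ := os.MkdirTemp("", "unit")
		defer os.RemoveAll(dir)
		p := filepath.Join(dir, "docker.service")
		changed, _ := installIfChanged(p, []byte("[Unit]\n"))
		fmt.Println(changed) // first write: true
		changed, _ = installIfChanged(p, []byte("[Unit]\n"))
		fmt.Println(changed) // identical content: false
	}
	```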
	
	I0916 05:00:12.864131   10604 main.go:141] libmachine: Checking connection to Docker...
	I0916 05:00:12.864138   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetURL
	I0916 05:00:12.864268   10604 main.go:141] libmachine: Docker is up and running!
	I0916 05:00:12.864276   10604 main.go:141] libmachine: Reticulating splines...
	I0916 05:00:12.864286   10604 client.go:171] duration metric: took 13.998201018s to LocalClient.Create
	I0916 05:00:12.864305   10604 start.go:167] duration metric: took 13.998245187s to libmachine.API.Create "default-k8s-diff-port-505000"
	I0916 05:00:12.864316   10604 start.go:293] postStartSetup for "default-k8s-diff-port-505000" (driver="hyperkit")
	I0916 05:00:12.864324   10604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 05:00:12.864334   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:12.864479   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 05:00:12.864493   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:12.864583   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:12.864681   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:12.864765   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:12.864847   10604 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/id_rsa Username:docker}
	I0916 05:00:12.905624   10604 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 05:00:12.909316   10604 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 05:00:12.909335   10604 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/addons for local assets ...
	I0916 05:00:12.909438   10604 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19651-1087/.minikube/files for local assets ...
	I0916 05:00:12.909619   10604 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem -> 16492.pem in /etc/ssl/certs
	I0916 05:00:12.909833   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 05:00:12.918395   10604 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/ssl/certs/16492.pem --> /etc/ssl/certs/16492.pem (1708 bytes)
	I0916 05:00:12.948546   10604 start.go:296] duration metric: took 84.18766ms for postStartSetup
	I0916 05:00:12.948576   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetConfigRaw
	I0916 05:00:12.949197   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetIP
	I0916 05:00:12.949341   10604 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/default-k8s-diff-port-505000/config.json ...
	I0916 05:00:12.949717   10604 start.go:128] duration metric: took 14.115705207s to createHost
	I0916 05:00:12.949730   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:12.949839   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:12.949919   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:12.950002   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:12.950082   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:12.950193   10604 main.go:141] libmachine: Using SSH client type: native
	I0916 05:00:12.950314   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3d5a820] 0x3d5d500 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0916 05:00:12.950321   10604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 05:00:13.013663   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726488012.939694299
	
	I0916 05:00:13.013674   10604 fix.go:216] guest clock: 1726488012.939694299
	I0916 05:00:13.013680   10604 fix.go:229] Guest: 2024-09-16 05:00:12.939694299 -0700 PDT Remote: 2024-09-16 05:00:12.949725 -0700 PDT m=+14.548350826 (delta=-10.030701ms)
	I0916 05:00:13.013700   10604 fix.go:200] guest clock delta is within tolerance: -10.030701ms
	I0916 05:00:13.013703   10604 start.go:83] releasing machines lock for "default-k8s-diff-port-505000", held for 14.179747228s
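	The guest-clock check above runs `date +%s.%N` in the VM, parses the seconds.nanoseconds output, and compares it to the host clock (the log's delta of -10.03ms is well inside tolerance). A sketch of the parse-and-compare step, assuming a tolerance parameter rather than minikube's actual threshold:

	```go
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (e.g. "1726488012.939694299")
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// pad or truncate the fractional part to exactly 9 digits of nanoseconds
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	// withinTolerance reports whether the guest/host clock skew is acceptable.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tol)
	}

	func main() {
		guest, _ := parseGuestClock("1726488012.939694299")
		host := time.Unix(1726488012, 949725000) // the host timestamp from the log
		fmt.Println(withinTolerance(guest, host, time.Second))
	}
	```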
	I0916 05:00:13.013721   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:13.013856   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetIP
	I0916 05:00:13.013950   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:13.014293   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:13.014418   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .DriverName
	I0916 05:00:13.014504   10604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 05:00:13.014537   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:13.014572   10604 ssh_runner.go:195] Run: cat /version.json
	I0916 05:00:13.014597   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHHostname
	I0916 05:00:13.014627   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:13.014693   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHPort
	I0916 05:00:13.014721   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:13.014775   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHKeyPath
	I0916 05:00:13.014804   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:13.014876   10604 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/id_rsa Username:docker}
	I0916 05:00:13.014893   10604 main.go:141] libmachine: (default-k8s-diff-port-505000) Calling .GetSSHUsername
	I0916 05:00:13.014970   10604 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/default-k8s-diff-port-505000/id_rsa Username:docker}
	I0916 05:00:13.048415   10604 ssh_runner.go:195] Run: systemctl --version
	I0916 05:00:13.094438   10604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 05:00:13.099023   10604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 05:00:13.099081   10604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 05:00:13.113122   10604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 05:00:13.113138   10604 start.go:495] detecting cgroup driver to use...
	I0916 05:00:13.113254   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 05:00:13.127997   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 05:00:13.137275   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 05:00:13.146392   10604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 05:00:13.146449   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 05:00:13.155428   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 05:00:13.164341   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 05:00:13.173473   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 05:00:13.182359   10604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 05:00:13.191570   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 05:00:13.200512   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 05:00:13.209420   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 05:00:13.218354   10604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 05:00:13.226434   10604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 05:00:13.234472   10604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 05:00:13.333462   10604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 05:00:13.351368   10604 start.go:495] detecting cgroup driver to use...
	I0916 05:00:13.351458   10604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 05:00:13.373683   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 05:00:13.384354   10604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 05:00:13.404982   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 05:00:13.416979   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 05:00:13.428063   10604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 05:00:13.488031   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 05:00:13.499311   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 05:00:13.514076   10604 ssh_runner.go:195] Run: which cri-dockerd
	I0916 05:00:13.517033   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 05:00:13.525402   10604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 05:00:13.539168   10604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 05:00:13.634401   10604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 05:00:13.740155   10604 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 05:00:13.740225   10604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
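	The 130-byte `/etc/docker/daemon.json` scp'd above is not shown in the log; its exact contents are unknown. As an illustration only, a cgroup-driver setting of that size would typically be expressed via `exec-opts`, which a Go helper could emit like this (field choices here are assumptions, not minikube's actual file):

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// daemonConfig renders an illustrative /etc/docker/daemon.json that pins the
	// cgroup driver; the exec-opts key is standard dockerd configuration, but the
	// rest of the real file's contents are not visible in the log.
	func daemonConfig(cgroupDriver string) ([]byte, error) {
		cfg := map[string]interface{}{
			"exec-opts": []string{"native.cgroupdriver=" + cgroupDriver},
		}
		return json.MarshalIndent(cfg, "", "  ")
	}

	func main() {
		b, _ := daemonConfig("cgroupfs")
		fmt.Println(string(b))
	}
	```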
	I0916 05:00:13.755555   10604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 05:00:13.863705   10604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 05:01:14.767987   10604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.898207883s)
	I0916 05:01:14.768060   10604 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0916 05:01:14.802832   10604 out.go:201] 
	W0916 05:01:14.824264   10604 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 16 12:00:11 default-k8s-diff-port-505000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:11.557022533Z" level=info msg="Starting up"
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:11.558179584Z" level=info msg="containerd not running, starting managed containerd"
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:11.558871243Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.573227542Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589398328Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589535087Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589610120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589646242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589727467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589769349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.589956533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.590000278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.590032565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.590061941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.590138957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.590322071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.591853445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.591910935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.592096016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.592140802Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.592236867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.592309206Z" level=info msg="metadata content store policy set" policy=shared
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596113385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596203770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596250813Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596288360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596320901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596420159Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596631843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596735514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596773326Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596824913Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596891784Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596924649Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596954901Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.596993021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597025762Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597055926Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597085655Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597114984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597152151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597186313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597223694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597262259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597292984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597323421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597353032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597388132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597422554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597464382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597498182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597529220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597558858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597590068Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597626164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597657869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597687972Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597762652Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597807279Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597887130Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597925815Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597955702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.597985245Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.598013562Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.598190085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.598276217Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.598338528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 16 12:00:11 default-k8s-diff-port-505000 dockerd[514]: time="2024-09-16T12:00:11.598378420Z" level=info msg="containerd successfully booted in 0.025749s"
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.583414246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.589709493Z" level=info msg="Loading containers: start."
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.671656087Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.750645161Z" level=info msg="Loading containers: done."
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.760954159Z" level=info msg="Docker daemon" commit=8b539b8 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.761109225Z" level=info msg="Daemon has completed initialization"
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.787125498Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 16 12:00:12 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:12.787207577Z" level=info msg="API listen on [::]:2376"
	Sep 16 12:00:12 default-k8s-diff-port-505000 systemd[1]: Started Docker Application Container Engine.
	Sep 16 12:00:13 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:13.801819458Z" level=info msg="Processing signal 'terminated'"
	Sep 16 12:00:13 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:13.802601940Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 16 12:00:13 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:13.802778884Z" level=info msg="Daemon shutdown complete"
	Sep 16 12:00:13 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:13.802885005Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 16 12:00:13 default-k8s-diff-port-505000 dockerd[507]: time="2024-09-16T12:00:13.802894840Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 16 12:00:13 default-k8s-diff-port-505000 systemd[1]: Stopping Docker Application Container Engine...
	Sep 16 12:00:14 default-k8s-diff-port-505000 systemd[1]: docker.service: Deactivated successfully.
	Sep 16 12:00:14 default-k8s-diff-port-505000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 12:00:14 default-k8s-diff-port-505000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 12:00:14 default-k8s-diff-port-505000 dockerd[910]: time="2024-09-16T12:00:14.842657841Z" level=info msg="Starting up"
	Sep 16 12:01:14 default-k8s-diff-port-505000 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Sep 16 12:00:14 default-k8s-diff-port-505000 systemd[1]: Stopped Docker Application Container Engine.
	Sep 16 12:00:14 default-k8s-diff-port-505000 systemd[1]: Starting Docker Application Container Engine...
	Sep 16 12:00:14 default-k8s-diff-port-505000 dockerd[910]: time="2024-09-16T12:00:14.842657841Z" level=info msg="Starting up"
	Sep 16 12:01:14 default-k8s-diff-port-505000 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 16 12:01:14 default-k8s-diff-port-505000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0916 05:01:14.824340   10604 out.go:270] * 
	* 
	W0916 05:01:14.825659   10604 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 05:01:14.887451   10604 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-505000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 6 (155.702694ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 05:01:15.094732   10937 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-505000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-505000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-505000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-505000 create -f testdata/busybox.yaml: exit status 1 (38.294699ms)

** stderr ** 
	error: context "default-k8s-diff-port-505000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-505000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 6 (150.267193ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 05:01:15.284622   10943 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-505000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-505000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 6 (152.746284ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 05:01:15.437213   10948 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-505000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-505000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (59.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-505000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0916 05:01:28.159144    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-505000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (59.731056789s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-505000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-505000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-505000 describe deploy/metrics-server -n kube-system: exit status 1 (37.812662ms)

** stderr ** 
	error: context "default-k8s-diff-port-505000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-505000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 6 (155.936515ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0916 05:02:15.364411   10967 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-505000" does not appear in /Users/jenkins/minikube-integration/19651-1087/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-505000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (59.93s)


Test pass (262/312)

Order  Passed test  Duration
3 TestDownloadOnly/v1.20.0/json-events 14.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 8.36
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.29
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
35 TestHyperKitDriverInstallOrUpdate 8.05
38 TestErrorSpam/setup 35.91
39 TestErrorSpam/start 1.74
40 TestErrorSpam/status 0.52
41 TestErrorSpam/pause 1.35
42 TestErrorSpam/unpause 1.43
43 TestErrorSpam/stop 155.87
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 80.92
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 42.6
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.94
55 TestFunctional/serial/CacheCmd/cache/add_local 1.35
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
57 TestFunctional/serial/CacheCmd/cache/list 0.08
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 1.2
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.58
63 TestFunctional/serial/ExtraConfig 39.58
64 TestFunctional/serial/ComponentHealth 0.05
65 TestFunctional/serial/LogsCmd 2.63
66 TestFunctional/serial/LogsFileCmd 2.66
67 TestFunctional/serial/InvalidService 3.7
69 TestFunctional/parallel/ConfigCmd 0.51
70 TestFunctional/parallel/DashboardCmd 23.97
71 TestFunctional/parallel/DryRun 1.1
72 TestFunctional/parallel/InternationalLanguage 0.65
73 TestFunctional/parallel/StatusCmd 0.54
77 TestFunctional/parallel/ServiceCmdConnect 12.38
78 TestFunctional/parallel/AddonsCmd 0.22
79 TestFunctional/parallel/PersistentVolumeClaim 27.51
81 TestFunctional/parallel/SSHCmd 0.31
82 TestFunctional/parallel/CpCmd 0.94
83 TestFunctional/parallel/MySQL 23.87
84 TestFunctional/parallel/FileSync 0.16
85 TestFunctional/parallel/CertSync 0.92
89 TestFunctional/parallel/NodeLabels 0.05
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
93 TestFunctional/parallel/License 0.59
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.13
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
105 TestFunctional/parallel/ServiceCmd/DeployApp 7.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
107 TestFunctional/parallel/ProfileCmd/profile_list 0.26
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
109 TestFunctional/parallel/MountCmd/any-port 7.2
110 TestFunctional/parallel/ServiceCmd/List 0.38
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
113 TestFunctional/parallel/ServiceCmd/Format 0.28
114 TestFunctional/parallel/ServiceCmd/URL 0.24
115 TestFunctional/parallel/Version/short 0.1
116 TestFunctional/parallel/Version/components 0.33
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
122 TestFunctional/parallel/ImageCommands/Setup 2
123 TestFunctional/parallel/MountCmd/specific-port 1.51
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.65
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.59
127 TestFunctional/parallel/DockerEnv/bash 0.56
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
136 TestFunctional/delete_echo-server_images 0.04
137 TestFunctional/delete_my-image_image 0.02
138 TestFunctional/delete_minikube_cached_images 0.02
153 TestMultiControlPlane/serial/RestartClusterKeepsNodes 69.35
163 TestImageBuild/serial/Setup 36.42
164 TestImageBuild/serial/NormalBuild 1.43
165 TestImageBuild/serial/BuildWithBuildArg 0.68
166 TestImageBuild/serial/BuildWithDockerIgnore 0.59
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.67
171 TestJSONOutput/start/Command 50.7
172 TestJSONOutput/start/Audit 0
174 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/pause/Command 0.46
178 TestJSONOutput/pause/Audit 0
180 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/unpause/Command 0.44
184 TestJSONOutput/unpause/Audit 0
186 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/stop/Command 8.33
190 TestJSONOutput/stop/Audit 0
192 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
194 TestErrorJSONOutput 0.58
199 TestMainNoArgs 0.08
200 TestMinikubeProfile 85.48
206 TestMultiNode/serial/FreshStart2Nodes 106.81
207 TestMultiNode/serial/DeployApp2Nodes 5.71
208 TestMultiNode/serial/PingHostFrom2Pods 0.9
209 TestMultiNode/serial/AddNode 46.05
210 TestMultiNode/serial/MultiNodeLabels 0.05
211 TestMultiNode/serial/ProfileList 0.18
212 TestMultiNode/serial/CopyFile 5.33
213 TestMultiNode/serial/StopNode 2.89
214 TestMultiNode/serial/StartAfterStop 41.65
215 TestMultiNode/serial/RestartKeepsNodes 191.78
216 TestMultiNode/serial/DeleteNode 3.27
217 TestMultiNode/serial/StopMultiNode 16.8
218 TestMultiNode/serial/RestartMultiNode 85.3
219 TestMultiNode/serial/ValidateNameConflict 41.41
223 TestPreload 136.91
226 TestSkaffold 112.91
229 TestRunningBinaryUpgrade 90.97
231 TestKubernetesUpgrade 1328.27
244 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.13
245 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.5
246 TestStoppedBinaryUpgrade/Setup 0.97
247 TestStoppedBinaryUpgrade/Upgrade 119.76
250 TestStoppedBinaryUpgrade/MinikubeLogs 2.48
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.51
260 TestNoKubernetes/serial/StartWithK8s 74.43
261 TestNetworkPlugins/group/auto/Start 88.18
262 TestNoKubernetes/serial/StartWithStopK8s 17.81
263 TestNoKubernetes/serial/Start 18.87
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
265 TestNoKubernetes/serial/ProfileList 0.46
266 TestNoKubernetes/serial/Stop 2.38
267 TestNoKubernetes/serial/StartNoArgs 19.52
268 TestNetworkPlugins/group/auto/KubeletFlags 0.15
269 TestNetworkPlugins/group/auto/NetCatPod 12.15
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
271 TestNetworkPlugins/group/flannel/Start 50.99
272 TestNetworkPlugins/group/auto/DNS 0.12
273 TestNetworkPlugins/group/auto/Localhost 0.11
274 TestNetworkPlugins/group/auto/HairPin 0.1
275 TestNetworkPlugins/group/enable-default-cni/Start 167.32
276 TestNetworkPlugins/group/flannel/ControllerPod 6
277 TestNetworkPlugins/group/flannel/KubeletFlags 0.15
278 TestNetworkPlugins/group/flannel/NetCatPod 11.14
279 TestNetworkPlugins/group/flannel/DNS 0.14
280 TestNetworkPlugins/group/flannel/Localhost 0.1
281 TestNetworkPlugins/group/flannel/HairPin 0.1
282 TestNetworkPlugins/group/kindnet/Start 61.68
283 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
284 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
285 TestNetworkPlugins/group/kindnet/NetCatPod 12.16
286 TestNetworkPlugins/group/kindnet/DNS 0.13
287 TestNetworkPlugins/group/kindnet/Localhost 0.11
288 TestNetworkPlugins/group/kindnet/HairPin 0.1
289 TestNetworkPlugins/group/bridge/Start 52.27
290 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.15
291 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.15
292 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
293 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
294 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
295 TestNetworkPlugins/group/kubenet/Start 49.07
296 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
297 TestNetworkPlugins/group/bridge/NetCatPod 11.14
298 TestNetworkPlugins/group/bridge/DNS 0.12
299 TestNetworkPlugins/group/bridge/Localhost 0.1
300 TestNetworkPlugins/group/bridge/HairPin 0.1
301 TestNetworkPlugins/group/custom-flannel/Start 51.83
302 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
303 TestNetworkPlugins/group/kubenet/NetCatPod 10.15
304 TestNetworkPlugins/group/kubenet/DNS 0.13
305 TestNetworkPlugins/group/kubenet/Localhost 0.14
306 TestNetworkPlugins/group/kubenet/HairPin 0.1
307 TestNetworkPlugins/group/calico/Start 64.69
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.2
310 TestNetworkPlugins/group/custom-flannel/DNS 0.13
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
313 TestNetworkPlugins/group/false/Start 81.32
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.16
316 TestNetworkPlugins/group/calico/NetCatPod 11.13
317 TestNetworkPlugins/group/calico/DNS 0.13
318 TestNetworkPlugins/group/calico/Localhost 0.1
319 TestNetworkPlugins/group/calico/HairPin 0.1
321 TestStartStop/group/old-k8s-version/serial/FirstStart 163.96
322 TestNetworkPlugins/group/false/KubeletFlags 0.16
323 TestNetworkPlugins/group/false/NetCatPod 11.13
324 TestNetworkPlugins/group/false/DNS 0.13
325 TestNetworkPlugins/group/false/Localhost 0.1
326 TestNetworkPlugins/group/false/HairPin 0.12
328 TestStartStop/group/no-preload/serial/FirstStart 82.45
329 TestStartStop/group/no-preload/serial/DeployApp 9.24
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.74
331 TestStartStop/group/no-preload/serial/Stop 8.39
332 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
333 TestStartStop/group/no-preload/serial/SecondStart 289.89
334 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
335 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
336 TestStartStop/group/old-k8s-version/serial/Stop 8.42
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.46
338 TestStartStop/group/old-k8s-version/serial/SecondStart 403.41
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
341 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
342 TestStartStop/group/no-preload/serial/Pause 1.92
344 TestStartStop/group/embed-certs/serial/FirstStart 78.8
345 TestStartStop/group/embed-certs/serial/DeployApp 8.19
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
347 TestStartStop/group/embed-certs/serial/Stop 8.42
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
349 TestStartStop/group/embed-certs/serial/SecondStart 293.25
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
352 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
353 TestStartStop/group/old-k8s-version/serial/Pause 1.92
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.41
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.02
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.93
366 TestStartStop/group/newest-cni/serial/FirstStart 42.56
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
369 TestStartStop/group/newest-cni/serial/Stop 8.48
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
373 TestStartStop/group/newest-cni/serial/SecondStart 29.2
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
375 TestStartStop/group/embed-certs/serial/Pause 1.92
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
379 TestStartStop/group/newest-cni/serial/Pause 1.88
TestDownloadOnly/v1.20.0/json-events (14.29s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-581000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-581000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (14.291196777s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.29s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-581000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-581000: exit status 85 (313.63865ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-581000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |          |
	|         | -p download-only-581000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:19:51
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:19:51.256357    1651 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:19:51.256648    1651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:51.256653    1651 out.go:358] Setting ErrFile to fd 2...
	I0916 03:19:51.256657    1651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:19:51.256820    1651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	W0916 03:19:51.256914    1651 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19651-1087/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19651-1087/.minikube/config/config.json: no such file or directory
	I0916 03:19:51.258684    1651 out.go:352] Setting JSON to true
	I0916 03:19:51.281538    1651 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1162,"bootTime":1726480829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:19:51.281685    1651 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:19:51.303985    1651 out.go:97] [download-only-581000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:19:51.304142    1651 notify.go:220] Checking for updates...
	W0916 03:19:51.304160    1651 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 03:19:51.325610    1651 out.go:169] MINIKUBE_LOCATION=19651
	I0916 03:19:51.346966    1651 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:19:51.368907    1651 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:19:51.390988    1651 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:19:51.412885    1651 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	W0916 03:19:51.454837    1651 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 03:19:51.455314    1651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:19:51.509035    1651 out.go:97] Using the hyperkit driver based on user configuration
	I0916 03:19:51.509091    1651 start.go:297] selected driver: hyperkit
	I0916 03:19:51.509104    1651 start.go:901] validating driver "hyperkit" against <nil>
	I0916 03:19:51.509308    1651 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:19:51.509686    1651 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:19:51.909794    1651 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:19:51.914687    1651 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:19:51.914705    1651 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:19:51.914732    1651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:19:51.919261    1651 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0916 03:19:51.919426    1651 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:19:51.919454    1651 cni.go:84] Creating CNI manager for ""
	I0916 03:19:51.919494    1651 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 03:19:51.919561    1651 start.go:340] cluster config:
	{Name:download-only-581000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-581000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:19:51.919789    1651 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:19:51.941248    1651 out.go:97] Downloading VM boot image ...
	I0916 03:19:51.941310    1651 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/iso/amd64/minikube-v1.34.0-1726415472-19646-amd64.iso
	I0916 03:19:57.661831    1651 out.go:97] Starting "download-only-581000" primary control-plane node in "download-only-581000" cluster
	I0916 03:19:57.661868    1651 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:57.718082    1651 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 03:19:57.718107    1651 cache.go:56] Caching tarball of preloaded images
	I0916 03:19:57.718376    1651 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 03:19:57.739941    1651 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 03:19:57.739969    1651 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 03:19:57.820045    1651 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-581000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-581000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)
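The LogsDuration check above runs `minikube logs -p download-only-581000` and treats exit status 85 (profile host does not exist) as the expected outcome rather than a failure. A minimal sketch of that exit-status pattern, using `sh -c "exit 85"` as a stand-in for the real minikube invocation:

```python
import subprocess

def run_expecting_status(argv, expected):
    """Run a command and report whether it exited with the expected status."""
    proc = subprocess.run(argv)
    return proc.returncode == expected

# Stand-in for "minikube logs -p <missing-profile>", which exits 85:
ok = run_expecting_status(["sh", "-c", "exit 85"], 85)
print(ok)  # → True
```

The test then still passes because the non-zero exit is the asserted behavior, not an error.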

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-581000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.1/json-events (8.36s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-537000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-537000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperkit : (8.360563942s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.36s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-537000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-537000: exit status 85 (291.867941ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-581000 | jenkins | v1.34.0 | 16 Sep 24 03:19 PDT |                     |
	|         | -p download-only-581000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| delete  | -p download-only-581000        | download-only-581000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT | 16 Sep 24 03:20 PDT |
	| start   | -o=json --download-only        | download-only-537000 | jenkins | v1.34.0 | 16 Sep 24 03:20 PDT |                     |
	|         | -p download-only-537000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 03:20:06
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 03:20:06.305556    1676 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:20:06.305816    1676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:06.305821    1676 out.go:358] Setting ErrFile to fd 2...
	I0916 03:20:06.305824    1676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:20:06.305986    1676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:20:06.307473    1676 out.go:352] Setting JSON to true
	I0916 03:20:06.329608    1676 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1177,"bootTime":1726480829,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:20:06.329704    1676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:20:06.351848    1676 out.go:97] [download-only-537000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:20:06.352004    1676 notify.go:220] Checking for updates...
	I0916 03:20:06.373630    1676 out.go:169] MINIKUBE_LOCATION=19651
	I0916 03:20:06.394645    1676 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:20:06.415586    1676 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:20:06.436957    1676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:20:06.458762    1676 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	W0916 03:20:06.500672    1676 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 03:20:06.501117    1676 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:20:06.531799    1676 out.go:97] Using the hyperkit driver based on user configuration
	I0916 03:20:06.531847    1676 start.go:297] selected driver: hyperkit
	I0916 03:20:06.531860    1676 start.go:901] validating driver "hyperkit" against <nil>
	I0916 03:20:06.532079    1676 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:06.532385    1676 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19651-1087/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0916 03:20:06.542043    1676 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0916 03:20:06.545916    1676 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:20:06.545934    1676 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0916 03:20:06.545958    1676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 03:20:06.548640    1676 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0916 03:20:06.548870    1676 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 03:20:06.548911    1676 cni.go:84] Creating CNI manager for ""
	I0916 03:20:06.548956    1676 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 03:20:06.548964    1676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 03:20:06.549027    1676 start.go:340] cluster config:
	{Name:download-only-537000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-537000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:20:06.549115    1676 iso.go:125] acquiring lock: {Name:mk3cc3bfc16e1cdaf9f6d8aeb3159dc7f0908ac6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 03:20:06.570343    1676 out.go:97] Starting "download-only-537000" primary control-plane node in "download-only-537000" cluster
	I0916 03:20:06.570382    1676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:06.618857    1676 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 03:20:06.618920    1676 cache.go:56] Caching tarball of preloaded images
	I0916 03:20:06.619321    1676 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 03:20:06.640680    1676 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 03:20:06.640724    1676 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0916 03:20:06.714111    1676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19651-1087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-537000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-537000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.29s)
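The preload downloads in the logs above fetch each tarball with a `?checksum=md5:...` suffix and verify the file before placing it in the cache. A minimal sketch of that verification step (the payload here is an arbitrary illustration, not the real tarball):

```python
import hashlib

def md5_matches(data: bytes, expected_hex: str) -> bool:
    """Verify downloaded bytes against an expected md5 hex digest."""
    return hashlib.md5(data).hexdigest() == expected_hex

payload = b"preloaded-images-k8s-v18"
print(md5_matches(payload, hashlib.md5(payload).hexdigest()))  # → True
```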

TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-537000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.96s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-100000 --alsologtostderr --binary-mirror http://127.0.0.1:49642 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-100000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-100000
--- PASS: TestBinaryMirror (0.96s)
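The `--binary-mirror http://127.0.0.1:49642` flag above points minikube's Kubernetes binary downloads at a local HTTP server instead of the default upstream. A sketch of standing up such a mirror, assuming any plain HTTP file server suffices (this serves the current directory on an ephemeral port; the real test wires up its own server and content):

```python
import http.server
import threading
import urllib.request

# Ephemeral local file server playing the role of the binary mirror:
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
status = urllib.request.urlopen(url).status
print(status)  # → 200
server.shutdown()
```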

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-981000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-981000: exit status 85 (229.604849ms)

-- stdout --
	* Profile "addons-981000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-981000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-981000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-981000: exit status 85 (167.535844ms)

-- stdout --
	* Profile "addons-981000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-981000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestHyperKitDriverInstallOrUpdate (8.05s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.05s)

TestErrorSpam/setup (35.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-704000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-704000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 --driver=hyperkit : (35.911854668s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.91s)

TestErrorSpam/start (1.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 start --dry-run
--- PASS: TestErrorSpam/start (1.74s)

TestErrorSpam/status (0.52s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 status
--- PASS: TestErrorSpam/status (0.52s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (155.87s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop: (5.408650033s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop: (1m15.230518886s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-704000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-704000 stop: (1m15.22564599s)
--- PASS: TestErrorSpam/stop (155.87s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19651-1087/.minikube/files/etc/test/nested/copy/1649/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-318000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m20.919759937s)
--- PASS: TestFunctional/serial/StartWithProxy (80.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-318000 --alsologtostderr -v=8: (42.600599897s)
functional_test.go:663: soft start took 42.601076952s for "functional-318000" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.60s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-318000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 cache add registry.k8s.io/pause:3.1: (1.073831813s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 cache add registry.k8s.io/pause:3.3: (1.012824017s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.94s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local481189470/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache add minikube-local-cache-test:functional-318000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache delete minikube-local-cache-test:functional-318000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-318000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (154.697892ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 kubectl -- --context functional-318000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 kubectl -- --context functional-318000 get pods: (1.199946767s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.20s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-318000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-318000 get pods: (1.577494494s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)

TestFunctional/serial/ExtraConfig (39.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-318000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.581933173s)
functional_test.go:761: restart took 39.582111812s for "functional-318000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.58s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-318000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 logs: (2.633776721s)
--- PASS: TestFunctional/serial/LogsCmd (2.63s)

TestFunctional/serial/LogsFileCmd (2.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3654451126/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3654451126/001/logs.txt: (2.654119024s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.66s)

TestFunctional/serial/InvalidService (3.7s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-318000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-318000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-318000: exit status 115 (265.599414ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30200 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-318000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.70s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 config get cpus: exit status 14 (68.53921ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 config get cpus: exit status 14 (55.203376ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (23.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-318000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-318000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2601: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.97s)

TestFunctional/parallel/DryRun (1.1s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-318000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (648.841389ms)
-- stdout --
	* [functional-318000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0916 03:28:31.568867    2315 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:28:31.569039    2315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:28:31.569045    2315 out.go:358] Setting ErrFile to fd 2...
	I0916 03:28:31.569048    2315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:28:31.569237    2315 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:28:31.570587    2315 out.go:352] Setting JSON to false
	I0916 03:28:31.593187    2315 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1682,"bootTime":1726480829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:28:31.593339    2315 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:28:31.614727    2315 out.go:177] * [functional-318000] minikube v1.34.0 on Darwin 14.6.1
	I0916 03:28:31.656739    2315 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:28:31.656865    2315 notify.go:220] Checking for updates...
	I0916 03:28:31.698409    2315 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:28:31.740377    2315 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:28:31.803448    2315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:28:31.824564    2315 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:28:31.845582    2315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:28:31.866817    2315 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:28:31.867166    2315 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:28:31.867204    2315 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:28:31.876209    2315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50283
	I0916 03:28:31.876566    2315 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:28:31.876978    2315 main.go:141] libmachine: Using API Version  1
	I0916 03:28:31.876993    2315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:28:31.877246    2315 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:28:31.877367    2315 main.go:141] libmachine: (functional-318000) Calling .DriverName
	I0916 03:28:31.877548    2315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:28:31.877836    2315 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:28:31.877856    2315 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:28:31.886495    2315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50285
	I0916 03:28:31.886865    2315 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:28:31.887226    2315 main.go:141] libmachine: Using API Version  1
	I0916 03:28:31.887245    2315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:28:31.887470    2315 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:28:31.887583    2315 main.go:141] libmachine: (functional-318000) Calling .DriverName
	I0916 03:28:31.932573    2315 out.go:177] * Using the hyperkit driver based on existing profile
	I0916 03:28:32.006571    2315 start.go:297] selected driver: hyperkit
	I0916 03:28:32.006630    2315 start.go:901] validating driver "hyperkit" against &{Name:functional-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:functional-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:28:32.006795    2315 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:28:32.047790    2315 out.go:201] 
	W0916 03:28:32.068596    2315 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 03:28:32.089481    2315 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.10s)

TestFunctional/parallel/InternationalLanguage (0.65s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-318000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-318000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (649.003067ms)
-- stdout --
	* [functional-318000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0916 03:28:30.911627    2308 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:28:30.911774    2308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:28:30.911779    2308 out.go:358] Setting ErrFile to fd 2...
	I0916 03:28:30.911783    2308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:28:30.911956    2308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:28:30.913528    2308 out.go:352] Setting JSON to false
	I0916 03:28:30.936338    2308 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1681,"bootTime":1726480829,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0916 03:28:30.936430    2308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0916 03:28:30.958227    2308 out.go:177] * [functional-318000] minikube v1.34.0 sur Darwin 14.6.1
	I0916 03:28:30.999818    2308 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 03:28:30.999881    2308 notify.go:220] Checking for updates...
	I0916 03:28:31.057841    2308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	I0916 03:28:31.115684    2308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0916 03:28:31.189688    2308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 03:28:31.249664    2308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	I0916 03:28:31.291614    2308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 03:28:31.313192    2308 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:28:31.313691    2308 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:28:31.313758    2308 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:28:31.323048    2308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50278
	I0916 03:28:31.323437    2308 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:28:31.323827    2308 main.go:141] libmachine: Using API Version  1
	I0916 03:28:31.323860    2308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:28:31.324106    2308 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:28:31.324231    2308 main.go:141] libmachine: (functional-318000) Calling .DriverName
	I0916 03:28:31.324433    2308 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 03:28:31.324699    2308 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:28:31.324728    2308 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:28:31.333162    2308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50280
	I0916 03:28:31.333541    2308 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:28:31.333844    2308 main.go:141] libmachine: Using API Version  1
	I0916 03:28:31.333856    2308 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:28:31.334069    2308 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:28:31.334202    2308 main.go:141] libmachine: (functional-318000) Calling .DriverName
	I0916 03:28:31.363753    2308 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0916 03:28:31.404444    2308 start.go:297] selected driver: hyperkit
	I0916 03:28:31.404472    2308 start.go:901] validating driver "hyperkit" against &{Name:functional-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 03:28:31.404683    2308 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 03:28:31.429989    2308 out.go:201] 
	W0916 03:28:31.450672    2308 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 03:28:31.471585    2308 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)
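The failure above is the point of the InternationalLanguage test: under a French locale, minikube rejects a deliberately tiny 250 MiB ("Mio") request against its usable minimum of 1800 MB ("Mo"). The mixed units matter, since a mebibyte is 1024², not 1000², bytes. A minimal sketch of that comparison in Go (the constant and function names are illustrative, not minikube's actual code):

```go
package main

import "fmt"

// Unit constants behind the log's "Mio" (mebibyte) and "Mo" (megabyte).
const (
	mib int64 = 1024 * 1024
	mb  int64 = 1000 * 1000
)

// enoughMemory reports whether a request of reqMiB mebibytes meets a
// minimum expressed in megabytes. Hypothetical helper for illustration.
func enoughMemory(reqMiB, minMB int64) bool {
	return reqMiB*mib >= minMB*mb
}

func main() {
	fmt.Println(enoughMemory(250, 1800))  // the rejected 250 MiB request: false
	fmt.Println(enoughMemory(4000, 1800)) // the profile's actual 4000 MiB: true
}
```

250 MiB is about 262 MB, well under the 1800 MB floor, which is why the start exits with RSRC_INSUFFICIENT_REQ_MEMORY.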

TestFunctional/parallel/StatusCmd (0.54s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.54s)
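The `-f` flag above renders the status struct through a Go `text/template` format string. A self-contained sketch of that expansion, with sample field values assumed (note the literal "kublet" label comes from the format string itself, not from a rendering error):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status mirrors the fields the -f format string references; the
// values used below are sample data, not taken from this run.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus expands the exact format string from the command above.
func renderStatus(st Status) string {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	var b strings.Builder
	if err := tmpl.Execute(&b, st); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(renderStatus(Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}))
}
```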

TestFunctional/parallel/ServiceCmdConnect (12.38s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-318000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-318000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cznhw" [1e966bff-7437-4a74-8c3b-ba3d6848ac4b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-cznhw" [1e966bff-7437-4a74-8c3b-ba3d6848ac4b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.006898173s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31277
functional_test.go:1675: http://192.169.0.4:31277: success! body:

Hostname: hello-node-connect-67bdd5bbb4-cznhw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31277
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.38s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2510be40-fd89-4b4b-8b52-1c05dfb5fa9b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003987919s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-318000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-318000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-318000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-318000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [19a32f14-44b9-4fc0-af22-39656700bc9e] Pending
helpers_test.go:344: "sp-pod" [19a32f14-44b9-4fc0-af22-39656700bc9e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [19a32f14-44b9-4fc0-af22-39656700bc9e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003604129s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-318000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-318000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-318000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14a2484e-1e41-4f17-9e2b-5f412843299f] Pending
helpers_test.go:344: "sp-pod" [14a2484e-1e41-4f17-9e2b-5f412843299f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14a2484e-1e41-4f17-9e2b-5f412843299f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003778376s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-318000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.51s)

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

TestFunctional/parallel/CpCmd (0.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh -n functional-318000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cp functional-318000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd173580016/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh -n functional-318000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh -n functional-318000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.94s)

TestFunctional/parallel/MySQL (23.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-318000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-np7nr" [2deb511d-6bbc-44fe-97e0-02e621f837a2] Pending
helpers_test.go:344: "mysql-6cdb49bbb-np7nr" [2deb511d-6bbc-44fe-97e0-02e621f837a2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-np7nr" [2deb511d-6bbc-44fe-97e0-02e621f837a2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.002759874s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-318000 exec mysql-6cdb49bbb-np7nr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-318000 exec mysql-6cdb49bbb-np7nr -- mysql -ppassword -e "show databases;": exit status 1 (121.438975ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-318000 exec mysql-6cdb49bbb-np7nr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-318000 exec mysql-6cdb49bbb-np7nr -- mysql -ppassword -e "show databases;": exit status 1 (157.008753ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-318000 exec mysql-6cdb49bbb-np7nr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.87s)

TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1649/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /etc/test/nested/copy/1649/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

TestFunctional/parallel/CertSync (0.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1649.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /etc/ssl/certs/1649.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1649.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /usr/share/ca-certificates/1649.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /etc/ssl/certs/16492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /usr/share/ca-certificates/16492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.92s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-318000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
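The NodeLabels check walks the first node's label map with a `range` template. Go's `text/template` iterates map keys in sorted order, so the output is deterministic. A sketch of the same template applied locally; the labels below are typical examples, not this node's actual set:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// labelKeys expands the same template shape the test passes to kubectl:
// print every label key followed by a space. text/template walks map
// keys in sorted order, so the result is stable.
func labelKeys(labels map[string]string) string {
	tmpl := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
	var b strings.Builder
	if err := tmpl.Execute(&b, labels); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	// Example labels only; a real node carries more kubernetes- and
	// minikube-managed keys than shown here.
	fmt.Println(labelKeys(map[string]string{
		"kubernetes.io/arch":   "amd64",
		"kubernetes.io/os":     "linux",
		"minikube.k8s.io/name": "functional-318000",
	}))
}
```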

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "sudo systemctl is-active crio": exit status 1 (168.295692ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2140: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-318000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [793ea8b6-f865-4867-a0d6-4521ad8a5d9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [793ea8b6-f865-4867-a0d6-4521ad8a5d9a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002081354s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.13s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-318000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.188.220 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-318000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-318000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-318000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-6fqxc" [bbe85a28-2ab8-4b73-af54-ac8e5ba88757] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-6fqxc" [bbe85a28-2ab8-4b73-af54-ac8e5ba88757] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004743762s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "180.665609ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "77.598914ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "174.651534ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "78.734314ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port155249840/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726482506651563000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port155249840/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726482506651563000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port155249840/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726482506651563000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port155249840/001/test-1726482506651563000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.904656ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:28 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:28 test-1726482506651563000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh cat /mount-9p/test-1726482506651563000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-318000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e3c2c697-209b-4a89-9969-cc78f3a74d88] Pending
helpers_test.go:344: "busybox-mount" [e3c2c697-209b-4a89-9969-cc78f3a74d88] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e3c2c697-209b-4a89-9969-cc78f3a74d88] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e3c2c697-209b-4a89-9969-cc78f3a74d88] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003388042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-318000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port155249840/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service list -o json
functional_test.go:1494: Took "372.533902ms" to run "out/minikube-darwin-amd64 -p functional-318000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31936
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31936
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-318000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-318000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-318000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-318000 image ls --format short --alsologtostderr:
I0916 03:29:03.879111    2624 out.go:345] Setting OutFile to fd 1 ...
I0916 03:29:03.879679    2624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:03.879689    2624 out.go:358] Setting ErrFile to fd 2...
I0916 03:29:03.879695    2624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:03.880083    2624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:29:03.880903    2624 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:03.880997    2624 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:03.881367    2624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:03.881401    2624 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:03.889790    2624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50617
I0916 03:29:03.890223    2624 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:03.890610    2624 main.go:141] libmachine: Using API Version  1
I0916 03:29:03.890619    2624 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:03.890823    2624 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:03.890935    2624 main.go:141] libmachine: (functional-318000) Calling .GetState
I0916 03:29:03.891023    2624 main.go:141] libmachine: (functional-318000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0916 03:29:03.891083    2624 main.go:141] libmachine: (functional-318000) DBG | hyperkit pid from json: 1904
I0916 03:29:03.892325    2624 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:03.892346    2624 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:03.900579    2624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50619
I0916 03:29:03.900925    2624 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:03.901299    2624 main.go:141] libmachine: Using API Version  1
I0916 03:29:03.901321    2624 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:03.901566    2624 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:03.901680    2624 main.go:141] libmachine: (functional-318000) Calling .DriverName
I0916 03:29:03.901844    2624 ssh_runner.go:195] Run: systemctl --version
I0916 03:29:03.901862    2624 main.go:141] libmachine: (functional-318000) Calling .GetSSHHostname
I0916 03:29:03.901943    2624 main.go:141] libmachine: (functional-318000) Calling .GetSSHPort
I0916 03:29:03.902025    2624 main.go:141] libmachine: (functional-318000) Calling .GetSSHKeyPath
I0916 03:29:03.902107    2624 main.go:141] libmachine: (functional-318000) Calling .GetSSHUsername
I0916 03:29:03.902196    2624 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/functional-318000/id_rsa Username:docker}
I0916 03:29:03.935703    2624 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 03:29:03.957763    2624 main.go:141] libmachine: Making call to close driver server
I0916 03:29:03.957771    2624 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:03.957928    2624 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:03.957931    2624 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:03.957939    2624 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 03:29:03.957947    2624 main.go:141] libmachine: Making call to close driver server
I0916 03:29:03.957951    2624 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:03.958081    2624 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:03.958085    2624 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:03.958089    2624 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-318000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-318000 | 16dca570a2746 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-318000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-318000 image ls --format table --alsologtostderr:
I0916 03:29:04.758630    2646 out.go:345] Setting OutFile to fd 1 ...
I0916 03:29:04.758815    2646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.758821    2646 out.go:358] Setting ErrFile to fd 2...
I0916 03:29:04.758825    2646 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.758991    2646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:29:04.759640    2646 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.759738    2646 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.760085    2646 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.760127    2646 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.768421    2646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50646
I0916 03:29:04.768844    2646 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.769240    2646 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.769273    2646 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.769517    2646 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.769643    2646 main.go:141] libmachine: (functional-318000) Calling .GetState
I0916 03:29:04.769726    2646 main.go:141] libmachine: (functional-318000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0916 03:29:04.769792    2646 main.go:141] libmachine: (functional-318000) DBG | hyperkit pid from json: 1904
I0916 03:29:04.771019    2646 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.771043    2646 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.779555    2646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50648
I0916 03:29:04.779919    2646 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.780259    2646 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.780271    2646 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.780488    2646 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.780601    2646 main.go:141] libmachine: (functional-318000) Calling .DriverName
I0916 03:29:04.780773    2646 ssh_runner.go:195] Run: systemctl --version
I0916 03:29:04.780790    2646 main.go:141] libmachine: (functional-318000) Calling .GetSSHHostname
I0916 03:29:04.780861    2646 main.go:141] libmachine: (functional-318000) Calling .GetSSHPort
I0916 03:29:04.780958    2646 main.go:141] libmachine: (functional-318000) Calling .GetSSHKeyPath
I0916 03:29:04.781048    2646 main.go:141] libmachine: (functional-318000) Calling .GetSSHUsername
I0916 03:29:04.781129    2646 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/functional-318000/id_rsa Username:docker}
I0916 03:29:04.815425    2646 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 03:29:04.833615    2646 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.833623    2646 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.833767    2646 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.833775    2646 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 03:29:04.833778    2646 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.833782    2646 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.833790    2646 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.833936    2646 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.833934    2646 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.833946    2646 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-318000 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-318000"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"16dca570a274646a969ff09e82d23abffb08638d0448124f9a1498ae515579f7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-318000"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/m
ysql:5.7"],"size":"501000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"
id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-318000 image ls --format json --alsologtostderr:
I0916 03:29:04.598835    2642 out.go:345] Setting OutFile to fd 1 ...
I0916 03:29:04.599023    2642 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.599028    2642 out.go:358] Setting ErrFile to fd 2...
I0916 03:29:04.599032    2642 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.599227    2642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:29:04.599876    2642 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.599972    2642 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.600332    2642 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.600377    2642 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.608939    2642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50641
I0916 03:29:04.609357    2642 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.609758    2642 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.609790    2642 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.610024    2642 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.610133    2642 main.go:141] libmachine: (functional-318000) Calling .GetState
I0916 03:29:04.610222    2642 main.go:141] libmachine: (functional-318000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0916 03:29:04.610288    2642 main.go:141] libmachine: (functional-318000) DBG | hyperkit pid from json: 1904
I0916 03:29:04.611532    2642 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.611556    2642 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.619863    2642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50643
I0916 03:29:04.620187    2642 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.620530    2642 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.620545    2642 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.620768    2642 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.620878    2642 main.go:141] libmachine: (functional-318000) Calling .DriverName
I0916 03:29:04.621049    2642 ssh_runner.go:195] Run: systemctl --version
I0916 03:29:04.621068    2642 main.go:141] libmachine: (functional-318000) Calling .GetSSHHostname
I0916 03:29:04.621156    2642 main.go:141] libmachine: (functional-318000) Calling .GetSSHPort
I0916 03:29:04.621238    2642 main.go:141] libmachine: (functional-318000) Calling .GetSSHKeyPath
I0916 03:29:04.621315    2642 main.go:141] libmachine: (functional-318000) Calling .GetSSHUsername
I0916 03:29:04.621394    2642 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/functional-318000/id_rsa Username:docker}
I0916 03:29:04.656437    2642 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 03:29:04.678200    2642 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.678208    2642 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.678351    2642 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.678360    2642 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.678361    2642 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 03:29:04.678375    2642 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.678381    2642 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.678538    2642 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.678554    2642 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.678578    2642 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls --format yaml --alsologtostderr
2024/09/16 03:29:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-318000 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-318000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 16dca570a274646a969ff09e82d23abffb08638d0448124f9a1498ae515579f7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-318000
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-318000 image ls --format yaml --alsologtostderr:
I0916 03:29:04.036298    2628 out.go:345] Setting OutFile to fd 1 ...
I0916 03:29:04.036492    2628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.036497    2628 out.go:358] Setting ErrFile to fd 2...
I0916 03:29:04.036500    2628 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.036670    2628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:29:04.037262    2628 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.037358    2628 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.037702    2628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.037753    2628 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.046078    2628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50622
I0916 03:29:04.046482    2628 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.046891    2628 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.046921    2628 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.047161    2628 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.047273    2628 main.go:141] libmachine: (functional-318000) Calling .GetState
I0916 03:29:04.047362    2628 main.go:141] libmachine: (functional-318000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0916 03:29:04.047427    2628 main.go:141] libmachine: (functional-318000) DBG | hyperkit pid from json: 1904
I0916 03:29:04.048647    2628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.048668    2628 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.056914    2628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50624
I0916 03:29:04.057239    2628 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.057535    2628 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.057551    2628 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.057776    2628 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.057891    2628 main.go:141] libmachine: (functional-318000) Calling .DriverName
I0916 03:29:04.058050    2628 ssh_runner.go:195] Run: systemctl --version
I0916 03:29:04.058067    2628 main.go:141] libmachine: (functional-318000) Calling .GetSSHHostname
I0916 03:29:04.058145    2628 main.go:141] libmachine: (functional-318000) Calling .GetSSHPort
I0916 03:29:04.058226    2628 main.go:141] libmachine: (functional-318000) Calling .GetSSHKeyPath
I0916 03:29:04.058311    2628 main.go:141] libmachine: (functional-318000) Calling .GetSSHUsername
I0916 03:29:04.058397    2628 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/functional-318000/id_rsa Username:docker}
I0916 03:29:04.091370    2628 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 03:29:04.108549    2628 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.108562    2628 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.108699    2628 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.108706    2628 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.108714    2628 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 03:29:04.108723    2628 main.go:141] libmachine: Making call to close driver server
I0916 03:29:04.108728    2628 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:04.108903    2628 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:04.108908    2628 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:04.108930    2628 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)
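The YAML listing above is assembled from the `docker images --no-trunc --format "{{json .}}"` call visible in the stderr log: one JSON object per image line, regrouped into id/repoTags/size entries. A rough Python sketch of that transformation (the `parse_docker_images` helper and the sample line are mine, not minikube's; the real formatting lives in minikube's image-listing code):

```python
import json

def parse_docker_images(json_lines):
    # One JSON object per line, as emitted by:
    #   docker images --no-trunc --format "{{json .}}"
    # Group by image ID so an image with several tags becomes one entry
    # with multiple repoTags, matching the YAML shape above.
    entries = {}
    for line in json_lines.strip().splitlines():
        img = json.loads(line)
        digest = img["ID"].removeprefix("sha256:")
        entry = entries.setdefault(digest, {
            "id": digest,
            "repoDigests": [],
            "repoTags": [],
            "size": img["Size"],
        })
        entry["repoTags"].append(f'{img["Repository"]}:{img["Tag"]}')
    return list(entries.values())

# Hypothetical sample line, shaped like the log's docker images output.
sample = '{"ID":"sha256:abc123","Repository":"registry.k8s.io/pause","Tag":"3.10","Size":"736kB"}'
print(parse_docker_images(sample))
```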

TestFunctional/parallel/ImageCommands/ImageBuild (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh pgrep buildkitd: exit status 1 (128.084356ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image build -t localhost/my-image:functional-318000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 image build -t localhost/my-image:functional-318000 testdata/build --alsologtostderr: (2.217930786s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-318000 image build -t localhost/my-image:functional-318000 testdata/build --alsologtostderr:
I0916 03:29:04.349695    2638 out.go:345] Setting OutFile to fd 1 ...
I0916 03:29:04.369382    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.369393    2638 out.go:358] Setting ErrFile to fd 2...
I0916 03:29:04.369398    2638 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 03:29:04.369643    2638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
I0916 03:29:04.370313    2638 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.370974    2638 config.go:182] Loaded profile config "functional-318000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 03:29:04.371342    2638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.371399    2638 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.379906    2638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50636
I0916 03:29:04.380329    2638 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.380742    2638 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.380754    2638 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.380975    2638 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.381086    2638 main.go:141] libmachine: (functional-318000) Calling .GetState
I0916 03:29:04.381170    2638 main.go:141] libmachine: (functional-318000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0916 03:29:04.381235    2638 main.go:141] libmachine: (functional-318000) DBG | hyperkit pid from json: 1904
I0916 03:29:04.382452    2638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0916 03:29:04.382490    2638 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0916 03:29:04.390858    2638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50638
I0916 03:29:04.391212    2638 main.go:141] libmachine: () Calling .GetVersion
I0916 03:29:04.391520    2638 main.go:141] libmachine: Using API Version  1
I0916 03:29:04.391531    2638 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 03:29:04.391727    2638 main.go:141] libmachine: () Calling .GetMachineName
I0916 03:29:04.391839    2638 main.go:141] libmachine: (functional-318000) Calling .DriverName
I0916 03:29:04.391983    2638 ssh_runner.go:195] Run: systemctl --version
I0916 03:29:04.392000    2638 main.go:141] libmachine: (functional-318000) Calling .GetSSHHostname
I0916 03:29:04.392075    2638 main.go:141] libmachine: (functional-318000) Calling .GetSSHPort
I0916 03:29:04.392152    2638 main.go:141] libmachine: (functional-318000) Calling .GetSSHKeyPath
I0916 03:29:04.392230    2638 main.go:141] libmachine: (functional-318000) Calling .GetSSHUsername
I0916 03:29:04.392317    2638 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/functional-318000/id_rsa Username:docker}
I0916 03:29:04.425499    2638 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1421292563.tar
I0916 03:29:04.425578    2638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 03:29:04.434348    2638 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1421292563.tar
I0916 03:29:04.437662    2638 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1421292563.tar: stat -c "%s %y" /var/lib/minikube/build/build.1421292563.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1421292563.tar': No such file or directory
I0916 03:29:04.437689    2638 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1421292563.tar --> /var/lib/minikube/build/build.1421292563.tar (3072 bytes)
I0916 03:29:04.458466    2638 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1421292563
I0916 03:29:04.466824    2638 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1421292563 -xf /var/lib/minikube/build/build.1421292563.tar
I0916 03:29:04.475301    2638 docker.go:360] Building image: /var/lib/minikube/build/build.1421292563
I0916 03:29:04.475379    2638 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-318000 /var/lib/minikube/build/build.1421292563
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:62a797956f45358abc29b4322efe2281eba7b420599d0462122007763f7cf859 done
#8 naming to localhost/my-image:functional-318000 done
#8 DONE 0.0s
I0916 03:29:06.472336    2638 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-318000 /var/lib/minikube/build/build.1421292563: (1.996975953s)
I0916 03:29:06.472404    2638 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1421292563
I0916 03:29:06.481053    2638 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1421292563.tar
I0916 03:29:06.489188    2638 build_images.go:217] Built localhost/my-image:functional-318000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1421292563.tar
I0916 03:29:06.489214    2638 build_images.go:133] succeeded building to: functional-318000
I0916 03:29:06.489219    2638 build_images.go:134] failed building to: 
I0916 03:29:06.489238    2638 main.go:141] libmachine: Making call to close driver server
I0916 03:29:06.489245    2638 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:06.489405    2638 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:06.489416    2638 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 03:29:06.489424    2638 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:06.489426    2638 main.go:141] libmachine: Making call to close driver server
I0916 03:29:06.489436    2638 main.go:141] libmachine: (functional-318000) Calling .Close
I0916 03:29:06.489607    2638 main.go:141] libmachine: (functional-318000) DBG | Closing plugin on server side
I0916 03:29:06.489666    2638 main.go:141] libmachine: Successfully made call to close driver server
I0916 03:29:06.489690    2638 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
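The ImageBuild stderr above shows the staging dance: stat the remote tar (expected to fail on first run), scp the build-context tar over, mkdir a per-build directory, untar into it, run `docker build`, then remove both. A local Python sketch of just the staging step, assuming nothing about minikube's internals beyond what the log shows (`stage_build_context` is a name I made up, and the real transfer happens over SSH, not a local copy):

```python
import os
import tarfile

def stage_build_context(context_dir, staging_root, name):
    # Pack the build context into <name>.tar (the build.NNN.tar in the log),
    # place it in the staging area, extract it into a per-build directory,
    # then delete the tar -- mirroring the scp / mkdir -p / tar -xf / rm -f
    # sequence in the stderr output above.
    tar_path = os.path.join(staging_root, name + ".tar")
    build_dir = os.path.join(staging_root, name)
    with tarfile.open(tar_path, "w") as tar:
        tar.add(context_dir, arcname=".")
    os.makedirs(build_dir, exist_ok=True)
    with tarfile.open(tar_path) as tar:
        tar.extractall(build_dir)
    os.remove(tar_path)
    return build_dir  # what `docker build -t ... <build_dir>` would point at
```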

TestFunctional/parallel/ImageCommands/Setup (2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.970454916s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-318000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/MountCmd/specific-port (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2602940340/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.903717ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2602940340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "sudo umount -f /mount-9p": exit status 1 (148.660336ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-318000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2602940340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)
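Note the pattern in this mount test: the first `findmnt -T /mount-9p | grep 9p` exits non-zero while the 9p mount is still coming up, and the harness simply re-runs it until it succeeds. That retry-until-ready idiom, as a generic Python sketch (`wait_for` is a hypothetical helper, not test-harness code):

```python
import time

def wait_for(check, timeout=5.0, interval=0.25):
    # Re-run `check` until it returns truthy or the deadline passes,
    # like the test re-running findmnt after an initial exit status 1.
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```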

TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount1: exit status 1 (204.35438ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount1: exit status 1 (180.741109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-318000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-318000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1370444656/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image load --daemon kicbase/echo-server:functional-318000 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-darwin-amd64 -p functional-318000 image load --daemon kicbase/echo-server:functional-318000 --alsologtostderr: (1.495206603s)
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image load --daemon kicbase/echo-server:functional-318000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.59s)

TestFunctional/parallel/DockerEnv/bash (0.56s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-318000 docker-env) && out/minikube-darwin-amd64 status -p functional-318000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-318000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.56s)
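The DockerEnv test works because `minikube docker-env` prints bash `export KEY="VALUE"` lines, which the `eval` turns into environment variables for the subsequent `docker images`. A small sketch of what that eval consumes (the parser and the sample text are mine; real `docker-env` output also includes comment lines, which are skipped here):

```python
import re

def parse_docker_env(script):
    # Pull KEY/VALUE pairs out of `export KEY="VALUE"` lines, i.e. what
    # `eval $(minikube -p functional-318000 docker-env)` applies to the shell.
    env = {}
    for m in re.finditer(r'^export (\w+)="([^"]*)"$', script, re.M):
        env[m.group(1)] = m.group(2)
    return env

# Hypothetical sample, shaped like docker-env output for this profile.
sample = (
    'export DOCKER_TLS_VERIFY="1"\n'
    'export DOCKER_HOST="tcp://192.169.0.4:2376"\n'
    "# To point your shell to minikube's docker-daemon, run:\n"
)
print(parse_docker_env(sample))
```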

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-318000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image load --daemon kicbase/echo-server:functional-318000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image save kicbase/echo-server:functional-318000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image rm kicbase/echo-server:functional-318000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-318000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-318000 image save --daemon kicbase/echo-server:functional-318000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-318000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-318000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-318000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-318000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (69.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-632000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-632000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-632000 -v=7 --alsologtostderr: (8.376361044s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-632000 --wait=true -v=7 --alsologtostderr
E0916 03:32:58.365829    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.374136    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.385655    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.408752    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.450873    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.533101    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:58.694956    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:59.016502    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:32:59.658894    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:33:00.941050    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:33:03.502652    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:33:08.624847    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:33:18.867079    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 03:33:39.350263    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-632000 --wait=true -v=7 --alsologtostderr: (1m0.860702545s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-632000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (69.35s)

                                                
                                    
TestImageBuild/serial/Setup (36.42s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-436000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-436000 --driver=hyperkit : (36.421143585s)
--- PASS: TestImageBuild/serial/Setup (36.42s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.43s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-436000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-436000: (1.426038527s)
--- PASS: TestImageBuild/serial/NormalBuild (1.43s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-436000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.68s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-436000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.59s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-436000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.67s)

                                                
                                    
TestJSONOutput/start/Command (50.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-578000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0916 03:42:58.459386    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-578000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (50.701781133s)
--- PASS: TestJSONOutput/start/Command (50.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-578000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-578000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-578000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-578000 --output=json --user=testUser: (8.328837628s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.58s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-844000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-844000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (368.940857ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3464b3c6-0278-4916-a322-3c1320b5efff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-844000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc14e052-5a8a-48ac-ba9b-4e3627c3246f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"44099178-9cfa-4bed-804d-c81d96663851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig"}}
	{"specversion":"1.0","id":"08371ad6-02ef-46ad-bf6f-bb5c8237b264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"71c66f64-fbd5-4d67-9e4a-b3ad7af1bda6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b601693d-eba7-47d3-b662-8f59776f6da5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube"}}
	{"specversion":"1.0","id":"fb6054bc-c370-4298-b1fe-8ed1a3228722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a5e0108c-7377-46d8-9670-bc0c90196d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-844000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-844000
--- PASS: TestErrorJSONOutput (0.58s)

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (85.48s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-606000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-606000 --driver=hyperkit : (36.034744948s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-617000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-617000 --driver=hyperkit : (38.065029019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-606000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-617000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-617000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-617000: (5.263406097s)
helpers_test.go:175: Cleaning up "first-606000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-606000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-606000: (5.301871555s)
--- PASS: TestMinikubeProfile (85.48s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-629000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0916 03:47:58.458770    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-629000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m46.563306384s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-629000 -- rollout status deployment/busybox: (4.017437212s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-8lgwm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-w7225 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-8lgwm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-w7225 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-8lgwm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-w7225 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-8lgwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-8lgwm -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-w7225 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-629000 -- exec busybox-7dff88458-w7225 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (46.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-629000 -v 3 --alsologtostderr
E0916 03:49:21.531966    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-629000 -v 3 --alsologtostderr: (45.72411539s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.05s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-629000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp testdata/cp-test.txt multinode-629000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1275681705/001/cp-test_multinode-629000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000:/home/docker/cp-test.txt multinode-629000-m02:/home/docker/cp-test_multinode-629000_multinode-629000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test_multinode-629000_multinode-629000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000:/home/docker/cp-test.txt multinode-629000-m03:/home/docker/cp-test_multinode-629000_multinode-629000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test_multinode-629000_multinode-629000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp testdata/cp-test.txt multinode-629000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1275681705/001/cp-test_multinode-629000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m02:/home/docker/cp-test.txt multinode-629000:/home/docker/cp-test_multinode-629000-m02_multinode-629000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test_multinode-629000-m02_multinode-629000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m02:/home/docker/cp-test.txt multinode-629000-m03:/home/docker/cp-test_multinode-629000-m02_multinode-629000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test_multinode-629000-m02_multinode-629000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp testdata/cp-test.txt multinode-629000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1275681705/001/cp-test_multinode-629000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m03:/home/docker/cp-test.txt multinode-629000:/home/docker/cp-test_multinode-629000-m03_multinode-629000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000 "sudo cat /home/docker/cp-test_multinode-629000-m03_multinode-629000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 cp multinode-629000-m03:/home/docker/cp-test.txt multinode-629000-m02:/home/docker/cp-test_multinode-629000-m03_multinode-629000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 ssh -n multinode-629000-m02 "sudo cat /home/docker/cp-test_multinode-629000-m03_multinode-629000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.33s)

TestMultiNode/serial/StopNode (2.89s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-629000 node stop m03: (2.373359681s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-629000 status: exit status 7 (257.376077ms)

-- stdout --
	multinode-629000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr: exit status 7 (257.298705ms)

-- stdout --
	multinode-629000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 03:49:53.221462    4109 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:49:53.221646    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:49:53.221651    4109 out.go:358] Setting ErrFile to fd 2...
	I0916 03:49:53.221654    4109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:49:53.221830    4109 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:49:53.222016    4109 out.go:352] Setting JSON to false
	I0916 03:49:53.222039    4109 mustload.go:65] Loading cluster: multinode-629000
	I0916 03:49:53.222093    4109 notify.go:220] Checking for updates...
	I0916 03:49:53.222388    4109 config.go:182] Loaded profile config "multinode-629000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:49:53.222401    4109 status.go:255] checking status of multinode-629000 ...
	I0916 03:49:53.222830    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.222875    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.231870    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51973
	I0916 03:49:53.232204    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.232611    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.232629    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.232845    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.232945    4109 main.go:141] libmachine: (multinode-629000) Calling .GetState
	I0916 03:49:53.233025    4109 main.go:141] libmachine: (multinode-629000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:49:53.233092    4109 main.go:141] libmachine: (multinode-629000) DBG | hyperkit pid from json: 3747
	I0916 03:49:53.234274    4109 status.go:330] multinode-629000 host status = "Running" (err=<nil>)
	I0916 03:49:53.234293    4109 host.go:66] Checking if "multinode-629000" exists ...
	I0916 03:49:53.234548    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.234568    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.242939    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51975
	I0916 03:49:53.243283    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.243662    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.243686    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.243920    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.244038    4109 main.go:141] libmachine: (multinode-629000) Calling .GetIP
	I0916 03:49:53.244116    4109 host.go:66] Checking if "multinode-629000" exists ...
	I0916 03:49:53.244386    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.244415    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.252847    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51977
	I0916 03:49:53.253165    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.253500    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.253512    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.253710    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.253816    4109 main.go:141] libmachine: (multinode-629000) Calling .DriverName
	I0916 03:49:53.253953    4109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:49:53.253972    4109 main.go:141] libmachine: (multinode-629000) Calling .GetSSHHostname
	I0916 03:49:53.254078    4109 main.go:141] libmachine: (multinode-629000) Calling .GetSSHPort
	I0916 03:49:53.254161    4109 main.go:141] libmachine: (multinode-629000) Calling .GetSSHKeyPath
	I0916 03:49:53.254269    4109 main.go:141] libmachine: (multinode-629000) Calling .GetSSHUsername
	I0916 03:49:53.254352    4109 sshutil.go:53] new ssh client: &{IP:192.169.0.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/multinode-629000/id_rsa Username:docker}
	I0916 03:49:53.288417    4109 ssh_runner.go:195] Run: systemctl --version
	I0916 03:49:53.292890    4109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:49:53.305013    4109 kubeconfig.go:125] found "multinode-629000" server: "https://192.169.0.11:8443"
	I0916 03:49:53.305038    4109 api_server.go:166] Checking apiserver status ...
	I0916 03:49:53.305080    4109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 03:49:53.316818    4109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1936/cgroup
	W0916 03:49:53.324840    4109 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1936/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 03:49:53.324890    4109 ssh_runner.go:195] Run: ls
	I0916 03:49:53.328591    4109 api_server.go:253] Checking apiserver healthz at https://192.169.0.11:8443/healthz ...
	I0916 03:49:53.332058    4109 api_server.go:279] https://192.169.0.11:8443/healthz returned 200:
	ok
	I0916 03:49:53.332069    4109 status.go:422] multinode-629000 apiserver status = Running (err=<nil>)
	I0916 03:49:53.332077    4109 status.go:257] multinode-629000 status: &{Name:multinode-629000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:49:53.332088    4109 status.go:255] checking status of multinode-629000-m02 ...
	I0916 03:49:53.332341    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.332362    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.341311    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51981
	I0916 03:49:53.341661    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.342028    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.342042    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.342250    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.342353    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetState
	I0916 03:49:53.342424    4109 main.go:141] libmachine: (multinode-629000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:49:53.342497    4109 main.go:141] libmachine: (multinode-629000-m02) DBG | hyperkit pid from json: 3787
	I0916 03:49:53.343657    4109 status.go:330] multinode-629000-m02 host status = "Running" (err=<nil>)
	I0916 03:49:53.343675    4109 host.go:66] Checking if "multinode-629000-m02" exists ...
	I0916 03:49:53.343936    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.343960    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.352404    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51983
	I0916 03:49:53.352751    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.353058    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.353068    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.353260    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.353367    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetIP
	I0916 03:49:53.353465    4109 host.go:66] Checking if "multinode-629000-m02" exists ...
	I0916 03:49:53.353723    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.353742    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.362086    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51985
	I0916 03:49:53.362464    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.362790    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.362799    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.363032    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.363158    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .DriverName
	I0916 03:49:53.363304    4109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 03:49:53.363315    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetSSHHostname
	I0916 03:49:53.363402    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetSSHPort
	I0916 03:49:53.363477    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetSSHKeyPath
	I0916 03:49:53.363567    4109 main.go:141] libmachine: (multinode-629000-m02) Calling .GetSSHUsername
	I0916 03:49:53.363653    4109 sshutil.go:53] new ssh client: &{IP:192.169.0.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19651-1087/.minikube/machines/multinode-629000-m02/id_rsa Username:docker}
	I0916 03:49:53.400038    4109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 03:49:53.411349    4109 status.go:257] multinode-629000-m02 status: &{Name:multinode-629000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:49:53.411365    4109 status.go:255] checking status of multinode-629000-m03 ...
	I0916 03:49:53.411636    4109 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:49:53.411659    4109 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:49:53.420281    4109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51988
	I0916 03:49:53.420657    4109 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:49:53.420981    4109 main.go:141] libmachine: Using API Version  1
	I0916 03:49:53.420992    4109 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:49:53.421196    4109 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:49:53.421313    4109 main.go:141] libmachine: (multinode-629000-m03) Calling .GetState
	I0916 03:49:53.421399    4109 main.go:141] libmachine: (multinode-629000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:49:53.421468    4109 main.go:141] libmachine: (multinode-629000-m03) DBG | hyperkit pid from json: 3878
	I0916 03:49:53.422617    4109 main.go:141] libmachine: (multinode-629000-m03) DBG | hyperkit pid 3878 missing from process table
	I0916 03:49:53.422665    4109 status.go:330] multinode-629000-m03 host status = "Stopped" (err=<nil>)
	I0916 03:49:53.422677    4109 status.go:343] host is not running, skipping remaining checks
	I0916 03:49:53.422683    4109 status.go:257] multinode-629000-m03 status: &{Name:multinode-629000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.89s)

TestMultiNode/serial/StartAfterStop (41.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-629000 node start m03 -v=7 --alsologtostderr: (41.270259235s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.65s)

TestMultiNode/serial/RestartKeepsNodes (191.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-629000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-629000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-629000: (18.855474585s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-629000 --wait=true -v=8 --alsologtostderr
E0916 03:52:58.461041    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-629000 --wait=true -v=8 --alsologtostderr: (2m52.808491698s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-629000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.78s)

TestMultiNode/serial/DeleteNode (3.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-629000 node delete m03: (2.928252939s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.27s)

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-629000 stop: (16.640953306s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-629000 status: exit status 7 (80.474736ms)

-- stdout --
	multinode-629000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr: exit status 7 (79.55916ms)

-- stdout --
	multinode-629000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 03:54:06.899481    4371 out.go:345] Setting OutFile to fd 1 ...
	I0916 03:54:06.899664    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:54:06.899669    4371 out.go:358] Setting ErrFile to fd 2...
	I0916 03:54:06.899673    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 03:54:06.899856    4371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19651-1087/.minikube/bin
	I0916 03:54:06.900041    4371 out.go:352] Setting JSON to false
	I0916 03:54:06.900063    4371 mustload.go:65] Loading cluster: multinode-629000
	I0916 03:54:06.900104    4371 notify.go:220] Checking for updates...
	I0916 03:54:06.900417    4371 config.go:182] Loaded profile config "multinode-629000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 03:54:06.900431    4371 status.go:255] checking status of multinode-629000 ...
	I0916 03:54:06.900851    4371 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:54:06.900890    4371 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:54:06.909857    4371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52220
	I0916 03:54:06.910180    4371 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:54:06.910574    4371 main.go:141] libmachine: Using API Version  1
	I0916 03:54:06.910588    4371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:54:06.910856    4371 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:54:06.910983    4371 main.go:141] libmachine: (multinode-629000) Calling .GetState
	I0916 03:54:06.911068    4371 main.go:141] libmachine: (multinode-629000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:54:06.911129    4371 main.go:141] libmachine: (multinode-629000) DBG | hyperkit pid from json: 4203
	I0916 03:54:06.912049    4371 main.go:141] libmachine: (multinode-629000) DBG | hyperkit pid 4203 missing from process table
	I0916 03:54:06.912096    4371 status.go:330] multinode-629000 host status = "Stopped" (err=<nil>)
	I0916 03:54:06.912109    4371 status.go:343] host is not running, skipping remaining checks
	I0916 03:54:06.912114    4371 status.go:257] multinode-629000 status: &{Name:multinode-629000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 03:54:06.912138    4371 status.go:255] checking status of multinode-629000-m02 ...
	I0916 03:54:06.912412    4371 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0916 03:54:06.912438    4371 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0916 03:54:06.920850    4371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52222
	I0916 03:54:06.921186    4371 main.go:141] libmachine: () Calling .GetVersion
	I0916 03:54:06.921549    4371 main.go:141] libmachine: Using API Version  1
	I0916 03:54:06.921582    4371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 03:54:06.921834    4371 main.go:141] libmachine: () Calling .GetMachineName
	I0916 03:54:06.921981    4371 main.go:141] libmachine: (multinode-629000-m02) Calling .GetState
	I0916 03:54:06.922083    4371 main.go:141] libmachine: (multinode-629000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0916 03:54:06.922159    4371 main.go:141] libmachine: (multinode-629000-m02) DBG | hyperkit pid from json: 4232
	I0916 03:54:06.923066    4371 main.go:141] libmachine: (multinode-629000-m02) DBG | hyperkit pid 4232 missing from process table
	I0916 03:54:06.923088    4371 status.go:330] multinode-629000-m02 host status = "Stopped" (err=<nil>)
	I0916 03:54:06.923097    4371 status.go:343] host is not running, skipping remaining checks
	I0916 03:54:06.923102    4371 status.go:257] multinode-629000-m02 status: &{Name:multinode-629000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/RestartMultiNode (85.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-629000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-629000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m24.953762031s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-629000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.30s)

TestMultiNode/serial/ValidateNameConflict (41.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-629000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-629000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-629000-m02 --driver=hyperkit : exit status 14 (418.888388ms)

-- stdout --
	* [multinode-629000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-629000-m02' is duplicated with machine name 'multinode-629000-m02' in profile 'multinode-629000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-629000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-629000-m03 --driver=hyperkit : (37.236350614s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-629000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-629000: exit status 80 (270.839289ms)

-- stdout --
	* Adding node m03 to cluster multinode-629000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-629000-m03 already exists in multinode-629000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-629000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-629000-m03: (3.43159215s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.41s)

                                                

                                    
TestPreload (136.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-939000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-939000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m14.718490798s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-939000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-939000 image pull gcr.io/k8s-minikube/busybox: (1.409802405s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-939000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-939000: (8.360247499s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-939000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0916 03:57:58.539087    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-939000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (47.024387593s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-939000 image list
helpers_test.go:175: Cleaning up "test-preload-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-939000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-939000: (5.243300319s)
--- PASS: TestPreload (136.91s)
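
TestPreload exercises this sequence: start with `--preload=false` on an older Kubernetes, pull an image, stop, restart with the current binary, and confirm the image survived via `image list`. A dry-run sketch of that sequence, with a hypothetical `MINIKUBE` variable defaulting to `echo` (profile, image, and flags come from the log):

```shell
MINIKUBE="${MINIKUBE:-echo minikube}"   # dry-run unless overridden
PROFILE="test-preload-939000"
IMAGE="gcr.io/k8s-minikube/busybox"

# Start without a preloaded tarball, on an older Kubernetes.
$MINIKUBE start -p "$PROFILE" --memory=2200 --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
# Pull an image into the node's container runtime.
$MINIKUBE -p "$PROFILE" image pull "$IMAGE"
# Stop, then restart; the pulled image should persist across the restart.
$MINIKUBE stop -p "$PROFILE"
$MINIKUBE start -p "$PROFILE" --memory=2200 --driver=hyperkit
$MINIKUBE -p "$PROFILE" image list
```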

                                                
                                    
TestSkaffold (112.91s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1695287257 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1695287257 version: (1.72631207s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-388000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-388000 --memory=2600 --driver=hyperkit : (36.21901375s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1695287257 run --minikube-profile skaffold-388000 --kube-context skaffold-388000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1695287257 run --minikube-profile skaffold-388000 --kube-context skaffold-388000 --status-check=true --port-forward=false --interactive=false: (57.394757218s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-86b875f86f-22vn8" [0c14e3ca-f0bc-4c90-8e06-91e3c5578b12] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.035032027s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5cd4fdf9d-4nhpp" [6eb9caea-8b3a-43c3-a189-325ee081292a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003882312s
helpers_test.go:175: Cleaning up "skaffold-388000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-388000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-388000: (5.245472515s)
--- PASS: TestSkaffold (112.91s)

                                                
                                    
TestRunningBinaryUpgrade (90.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1896119127 start -p running-upgrade-755000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1896119127 start -p running-upgrade-755000 --memory=2200 --vm-driver=hyperkit : (1m1.318374471s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-755000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-755000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (22.863761438s)
helpers_test.go:175: Cleaning up "running-upgrade-755000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-755000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-755000: (5.255599676s)
--- PASS: TestRunningBinaryUpgrade (90.97s)

                                                
                                    
TestKubernetesUpgrade (1328.27s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (53.674325288s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-984000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-984000: (8.384117624s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-984000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-984000 status --format={{.Host}}: exit status 7 (67.920952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0916 04:22:39.741799    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:22:41.638054    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:22:58.558398    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:24:02.816180    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:27:39.745060    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:27:58.564233    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m30.215820601s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-984000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (482.007479ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-984000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-984000
	    minikube start -p kubernetes-upgrade-984000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9840002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-984000 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit 
E0916 04:32:39.748135    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:32:58.567001    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:37:39.747327    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:37:58.565855    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-984000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperkit : (10m30.147531606s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-984000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-984000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-984000: (5.251603016s)
--- PASS: TestKubernetesUpgrade (1328.27s)
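
The K8S_DOWNGRADE_UNSUPPORTED exit (status 106) above shows that minikube refuses in-place downgrades of an existing cluster; the suggestion block offers recreate, second cluster, or keep-as-is. A dry-run sketch of the recreate path (option 1), with a hypothetical `MINIKUBE` variable defaulting to `echo` so nothing is actually deleted:

```shell
MINIKUBE="${MINIKUBE:-echo minikube}"   # dry-run unless overridden
PROFILE="kubernetes-upgrade-984000"

# In-place downgrade is refused, so recreate the cluster at the older version.
$MINIKUBE delete -p "$PROFILE"
$MINIKUBE start -p "$PROFILE" --kubernetes-version=v1.20.0
```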

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.13s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19651
- KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2474990758/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2474990758/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2474990758/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2474990758/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.13s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.5s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19651
- KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2236025737/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2236025737/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2236025737/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2236025737/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.97s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.101401026 start -p stopped-upgrade-802000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.101401026 start -p stopped-upgrade-802000 --memory=2200 --vm-driver=hyperkit : (39.262516971s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.101401026 -p stopped-upgrade-802000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.101401026 -p stopped-upgrade-802000 stop: (8.227494429s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-802000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0916 04:40:42.824243    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-802000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m12.273684449s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-802000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-802000: (2.477279296s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (513.539294ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-492000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19651
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19651-1087/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19651-1087/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)
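
The MK_USAGE error above (exit status 14) documents that `--no-kubernetes` cannot be combined with an explicit `--kubernetes-version`; if a version is pinned in the global config, it has to be unset first, as the error message suggests. A dry-run sketch, with a hypothetical `MINIKUBE` variable defaulting to `echo`:

```shell
MINIKUBE="${MINIKUBE:-echo minikube}"   # dry-run unless overridden
PROFILE="NoKubernetes-492000"

# Clear any globally pinned Kubernetes version, then start without Kubernetes.
$MINIKUBE config unset kubernetes-version
$MINIKUBE start -p "$PROFILE" --no-kubernetes --driver=hyperkit
```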

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (74.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-492000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-492000 --driver=hyperkit : (1m14.262049346s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-492000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.43s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (88.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m28.184655622s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.18s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --driver=hyperkit : (15.05546304s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-492000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-492000 status -o json: exit status 2 (147.480522ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-492000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-492000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-492000: (2.601931829s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.81s)

                                                
                                    
TestNoKubernetes/serial/Start (18.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --driver=hyperkit 
E0916 04:42:39.747413    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-492000 --no-kubernetes --driver=hyperkit : (18.873515377s)
--- PASS: TestNoKubernetes/serial/Start (18.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (126.174787ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
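
The exit status 1 above is the expected outcome: `systemctl is-active --quiet` exits non-zero when the unit is not active (typically 3 for inactive, visible here as "Process exited with status 3"), and the ssh wrapper propagates that as a failure. A local sketch of the same exit-code pattern, with `sh -c 'exit 3'` standing in for the remote systemctl call:

```shell
# Stand-in for: ssh'ing into the node and running
#   sudo systemctl is-active --quiet service kubelet
if sh -c 'exit 3'; then
  echo "kubelet active"
else
  status=$?                      # 3 mirrors systemctl's "inactive" exit code
  echo "kubelet not running (exit $status)"
fi
```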

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.46s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-492000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-492000: (2.380928965s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-492000 --driver=hyperkit 
E0916 04:42:58.565910    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-492000 --driver=hyperkit : (19.515497024s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.52s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m7pcc" [53af4c3a-edb9-4843-94be-271c487c35b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m7pcc" [53af4c3a-edb9-4843-94be-271c487c35b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003305468s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-492000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (131.712227ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (50.985213954s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (167.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (2m47.315441179s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (167.32s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t9kx4" [963fc4f4-5597-47c4-8fe8-478f1a6a85aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00362586s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c8qhw" [dd44763b-15c1-48de-9b57-a0bc35136bb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c8qhw" [dd44763b-15c1-48de-9b57-a0bc35136bb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005147684s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kindnet/Start (61.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m1.679790588s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.68s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2cm7w" [ef0a411f-beab-48bc-abe5-15f87244e896] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00459911s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bptgn" [ab4faec3-58fd-421b-b14e-2bfeea3973f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bptgn" [ab4faec3-58fd-421b-b14e-2bfeea3973f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.002511934s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.16s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (52.27s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (52.270214823s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m87td" [0be9600b-53b9-4fc8-a029-2490e0fd4315] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m87td" [0be9600b-53b9-4fc8-a029-2490e0fd4315] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003214737s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (49.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (49.074625816s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.07s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x2th8" [5937cdb9-6b7d-4852-a689-e16d3187afe9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x2th8" [5937cdb9-6b7d-4852-a689-e16d3187afe9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003393153s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.14s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (51.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (51.828223331s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.83s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.15s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fn5bj" [b3629085-306e-4b21-aaab-49882c0ee1ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fn5bj" [b3629085-306e-4b21-aaab-49882c0ee1ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00492787s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.15s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (64.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0916 04:48:13.787650    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:48:16.350596    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:48:21.473645    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:48:31.715171    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m4.692739373s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.69s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-466kk" [f3bf5100-30a1-4400-8626-42dcdbcfb96d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-466kk" [f3bf5100-30a1-4400-8626-42dcdbcfb96d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002832147s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (81.32s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0916 04:49:10.129827    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.136496    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.148318    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.170555    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.212115    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.295091    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.456357    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:10.778617    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:11.420686    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:12.701943    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:15.264961    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-838000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m21.319295705s)
--- PASS: TestNetworkPlugins/group/false/Start (81.32s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zzn78" [1e97965b-e957-4a08-9f6d-9e7ee823f9a3] Running
E0916 04:49:20.386366    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00433633s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6s85f" [34d02dab-a4f8-4aa7-9944-53015e54beac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6s85f" [34d02dab-a4f8-4aa7-9944-53015e54beac] Running
E0916 04:49:30.629298    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:49:33.160097    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.002936313s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.13s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-960000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-960000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m43.955251584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.96s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-838000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-838000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qcmp7" [46298f48-77fd-46c2-9caa-3cb85aa03640] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 04:50:32.075086    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qcmp7" [46298f48-77fd-46c2-9caa-3cb85aa03640] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003657925s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.13s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-838000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-838000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)
E0916 05:04:34.319877    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.1
E0916 04:50:57.078101    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:07.320062    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:27.803007    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.126952    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.133399    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.145786    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.167766    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.209741    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.291173    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.452838    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:28.774987    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:29.416379    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:30.698505    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:33.261586    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:38.383396    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:48.626229    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:51:53.998345    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:08.764703    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:09.108294    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.537641    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.544139    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.555766    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.578884    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.621013    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.702940    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:15.864177    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:16.185878    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:16.827553    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.1: (1m22.444985129s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.45s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [889652ea-7924-4338-ac18-bd1e573b2fd6] Pending
E0916 04:52:18.109749    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [889652ea-7924-4338-ac18-bd1e573b2fd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 04:52:20.672119    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [889652ea-7924-4338-ac18-bd1e573b2fd6] Running
E0916 04:52:25.795190    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003459763s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-328000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-328000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-328000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.74s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-328000 --alsologtostderr -v=3
E0916 04:52:36.037574    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-328000 --alsologtostderr -v=3: (8.390247793s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.39s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-328000 -n no-preload-328000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-328000 -n no-preload-328000: exit status 7 (68.4948ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-328000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-328000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.1: (4m49.724232735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-328000 -n no-preload-328000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-960000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a62ddff-cc8e-4a15-ada5-b2ed91ee02b2] Pending
helpers_test.go:344: "busybox" [5a62ddff-cc8e-4a15-ada5-b2ed91ee02b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 04:52:39.750217    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5a62ddff-cc8e-4a15-ada5-b2ed91ee02b2] Running
E0916 04:52:45.063034    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.069932    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.081987    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.103287    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.144792    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.226189    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.387765    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:45.709359    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003277373s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-960000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-960000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0916 04:52:46.351603    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-960000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-960000 --alsologtostderr -v=3
E0916 04:52:47.632971    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:50.070785    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:50.195523    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:55.319205    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-960000 --alsologtostderr -v=3: (8.417703028s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-960000 -n old-k8s-version-960000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-960000 -n old-k8s-version-960000: exit status 7 (68.822423ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-960000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-960000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0916 04:52:56.519004    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:52:58.568744    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:05.561194    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:11.216596    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:26.044660    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:30.686410    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.213660    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.220115    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.231713    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.254278    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.296751    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.378171    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.540378    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:36.862066    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:37.480770    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:37.504705    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:38.786954    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:38.926522    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:41.350446    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:46.472161    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:53:56.715528    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:07.006949    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:10.130684    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:11.994274    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.197560    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.275421    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.283236    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.296851    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.319584    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.360860    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.443265    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.605823    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:17.927106    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:18.568504    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:19.850528    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:22.412894    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:27.534964    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:37.777282    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:37.842026    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:58.159517    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:58.258889    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:54:59.403248    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.698405    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.705023    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.718403    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.740221    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.781561    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:26.863088    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:27.026537    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:27.348986    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:27.990475    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:28.930104    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:29.272487    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:31.835231    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:36.957590    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:39.222206    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:46.820160    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:55:47.201328    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:01.652286    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:07.683002    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:14.529060    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kindnet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:20.081603    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:28.128045    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:48.646682    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:56:55.838318    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/enable-default-cni-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:57:01.144124    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:57:15.539959    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:57:22.829551    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-960000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m43.247683743s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-960000 -n old-k8s-version-960000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (403.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l2gcn" [22d5e40a-fd12-4429-86e9-f2c6d9ef03b3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005276046s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l2gcn" [22d5e40a-fd12-4429-86e9-f2c6d9ef03b3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004729962s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-328000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-328000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-328000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-328000 -n no-preload-328000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-328000 -n no-preload-328000: exit status 2 (168.678948ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-328000 -n no-preload-328000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-328000 -n no-preload-328000: exit status 2 (166.598748ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-328000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-328000 -n no-preload-328000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-328000 -n no-preload-328000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.92s)

TestStartStop/group/embed-certs/serial/FirstStart (78.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.1
E0916 04:57:45.065150    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:57:58.570577    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:58:10.568468    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/false-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:58:11.218038    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:58:12.771968    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 04:58:36.215466    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.1: (1m18.804515832s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.80s)

TestStartStop/group/embed-certs/serial/DeployApp (8.19s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877000 create -f testdata/busybox.yaml
E0916 04:59:03.925505    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e2cf466-89c8-4e9b-8004-e36e68950801] Pending
helpers_test.go:344: "busybox" [6e2cf466-89c8-4e9b-8004-e36e68950801] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e2cf466-89c8-4e9b-8004-e36e68950801] Running
E0916 04:59:10.131749    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003655609s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.19s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-877000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-877000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (8.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-877000 --alsologtostderr -v=3
E0916 04:59:17.276379    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-877000 --alsologtostderr -v=3: (8.420089986s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000: exit status 7 (68.097974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-877000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (293.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.31.1: (4m53.076149576s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (293.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-msrdx" [7d0ebb59-aef2-4342-8e14-a5b64223ad65] Running
E0916 04:59:44.986361    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002400052s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-msrdx" [7d0ebb59-aef2-4342-8e14-a5b64223ad65] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002272424s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-960000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-960000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-960000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-960000 -n old-k8s-version-960000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-960000 -n old-k8s-version-960000: exit status 2 (179.010901ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-960000 -n old-k8s-version-960000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-960000 -n old-k8s-version-960000: exit status 2 (155.663658ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-960000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-960000 -n old-k8s-version-960000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-960000 -n old-k8s-version-960000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-505000 --alsologtostderr -v=3
E0916 05:02:15.568810    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/bridge-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:17.865272    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:17.872216    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:17.883976    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:17.906294    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:17.948867    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:18.032253    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:18.194057    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:18.516295    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:19.158238    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:20.440418    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:23.003797    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-505000 --alsologtostderr -v=3: (8.405441928s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 7 (68.963405ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-505000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-505000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1
E0916 05:02:28.126317    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.255067    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.261583    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.273823    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.296054    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.337706    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.420397    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.582256    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:37.904480    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:38.368240    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:38.545895    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:39.780626    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/skaffold-388000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:39.827807    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:42.391423    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:45.092937    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/kubenet-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:47.514165    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:57.756877    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:58.598054    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:02:58.850016    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-505000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.1: (45.845871646s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5h6rd" [66a3a9ec-b3ae-43e5-93ef-f4d72594a748] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0916 05:03:11.247203    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/auto-838000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5h6rd" [66a3a9ec-b3ae-43e5-93ef-f4d72594a748] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.00410381s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5h6rd" [66a3a9ec-b3ae-43e5-93ef-f4d72594a748] Running
E0916 05:03:18.240266    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004275471s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-505000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-505000 image list --format=json
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-505000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 2 (160.16428ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000: exit status 2 (158.496176ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-505000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505000 -n default-k8s-diff-port-505000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-492000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.1
E0916 05:03:36.244926    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/custom-flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:03:39.812344    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/no-preload-328000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:03:59.202255    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/old-k8s-version-960000/client.crt: no such file or directory" logger="UnhandledError"
E0916 05:04:10.163047    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/flannel-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-492000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.1: (42.561468491s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-492000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-492000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-492000 --alsologtostderr -v=3: (8.477037705s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xn54b" [25aa4a5b-0d71-4e17-ba3b-5d03d674d629] Running
E0916 05:04:17.306800    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/calico-838000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003837053s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xn54b" [25aa4a5b-0d71-4e17-ba3b-5d03d674d629] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003023553s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-492000 -n newest-cni-492000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-492000 -n newest-cni-492000: exit status 7 (68.79731ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-492000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-492000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-492000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.1: (29.045844978s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-492000 -n newest-cni-492000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-877000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (1.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000: exit status 2 (161.957781ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000: exit status 2 (164.534586ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.92s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-492000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-492000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-492000 -n newest-cni-492000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-492000 -n newest-cni-492000: exit status 2 (161.195261ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-492000 -n newest-cni-492000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-492000 -n newest-cni-492000: exit status 2 (154.751533ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-492000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-492000 -n newest-cni-492000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-492000 -n newest-cni-492000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.88s)

                                                
                                    

Test skip (19/312)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0916 04:02:58.542267    1649 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19651-1087/.minikube/profiles/functional-318000/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-838000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-838000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/hosts:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/resolv.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-838000

>>> host: crictl pods:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crictl containers:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe netcat deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-838000" does not exist

>>> k8s: netcat logs:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-838000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-838000" does not exist

>>> k8s: coredns logs:
error: context "cilium-838000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-838000" does not exist

>>> k8s: api server logs:
error: context "cilium-838000" does not exist

>>> host: /etc/cni:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip a s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: ip r s:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables-save:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: iptables table nat:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-838000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-838000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-838000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-838000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-838000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: kubelet daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubelet logs:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-838000

>>> host: docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: docker system info:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-docker daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: cri-dockerd version:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: containerd config dump:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon status:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio daemon config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: /etc/crio:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

>>> host: crio config:
* Profile "cilium-838000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-838000"

----------------------- debugLogs end: cilium-838000 [took: 5.652218358s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-838000
--- SKIP: TestNetworkPlugins/group/cilium (5.86s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-639000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-639000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)